Message-ID: <20180524211405.GA7206@andrea>
Date: Thu, 24 May 2018 23:14:05 +0200
From: Andrea Parri <andrea.parri@...rulasolutions.com>
To: Boqun Feng <boqun.feng@...il.com>
Cc: Will Deacon <will.deacon@....com>,
Linus Torvalds <torvalds@...ux-foundation.org>,
psodagud@...eaurora.org, Kees Cook <keescook@...omium.org>,
Andy Lutomirski <luto@...capital.net>,
Will Drewry <wad@...omium.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Rik van Riel <riel@...hat.com>,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...nel.org>,
Peter Zijlstra <peterz@...radead.org>,
Eric Biggers <ebiggers@...gle.com>,
Frederic Weisbecker <fweisbec@...il.com>, sherryy@...roid.com,
Vegard Nossum <vegard.nossum@...cle.com>,
Christoph Lameter <cl@...ux.com>,
Andrea Arcangeli <aarcange@...hat.com>,
Sasha Levin <alexander.levin@...izon.com>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
paulmck@...ux.vnet.ibm.com, stern@...land.harvard.edu
Subject: Re: write_lock_irq(&tasklist_lock)
> Yeah, lemme put some details here:
>
> So we have three cases:
>
> Case #1 (from Will)
>
> P0:                      P1:                      P2:
>
> spin_lock(&slock)        read_lock(&rwlock)
>                                                   write_lock(&rwlock)
> read_lock(&rwlock)       spin_lock(&slock)
>
> , which is a deadlock, and cannot be detected by lockdep yet. And
> lockdep could detect this with the patch I attach at the end of the
> mail.
>
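
For concreteness, the three paths in case #1 could look roughly like the
sketch below in kernel code; the function names, the specific locks and
the CPU placement are made up for illustration, only the nesting of the
acquisitions matters:

#include <linux/spinlock.h>

static DEFINE_SPINLOCK(slock);
static DEFINE_RWLOCK(rwlock);

static void p0(void)			/* e.g. runs on CPU0 */
{
	spin_lock(&slock);
	read_lock(&rwlock);		/* queues behind P2's waiting writer */
	read_unlock(&rwlock);
	spin_unlock(&slock);
}

static void p1(void)			/* e.g. runs on CPU1 */
{
	read_lock(&rwlock);
	spin_lock(&slock);		/* waits for P0 to release slock */
	spin_unlock(&slock);
	read_unlock(&rwlock);
}

static void p2(void)			/* e.g. runs on CPU2 */
{
	write_lock(&rwlock);		/* waits for P1's reader to go away */
	write_unlock(&rwlock);
}

With queued rwlocks P0's reader is fair, so it sits behind P2's writer,
P2 waits for P1 to drop the read lock, and P1 waits for P0 to drop
slock: a cycle.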
> Case #2
>
> P0:                      P1:                      P2:
>
> <in irq handler>
> spin_lock(&slock)        read_lock(&rwlock)
>                                                   write_lock(&rwlock)
> read_lock(&rwlock)       spin_lock_irq(&slock)
>
> , which is not a deadlock, as the read_lock() on P0 can use the unfair
> fastpath.
>
> Case #3
>
> P0:                      P1:                      P2:
>
>                          <in irq handler>
> spin_lock_irq(&slock)    read_lock(&rwlock)
>                                                   write_lock_irq(&rwlock)
> read_lock(&rwlock)       spin_lock(&slock)
>
> , which is a deadlock, as the read_lock() on P0 cannot use the fastpath.
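
The asymmetry between case #2 and case #3 is entirely in the queued
rwlock reader path: only a reader running in_interrupt() is allowed to
bypass a waiting writer.  A heavily simplified sketch of that path (see
kernel/locking/qrwlock.c for the real code; this is an illustration,
not the literal implementation):

#include <linux/atomic.h>
#include <linux/preempt.h>		/* in_interrupt() */
#include <asm-generic/qrwlock.h>	/* struct qrwlock, _QW_LOCKED, _QR_BIAS */

void queued_read_lock_slowpath(struct qrwlock *lock)
{
	if (in_interrupt()) {
		/*
		 * A reader in interrupt context only waits for an already
		 * *held* write lock to go away; it does not queue behind a
		 * waiting writer.  This is the unfair path that lets case #2
		 * make progress.
		 */
		atomic_cond_read_acquire(&lock->cnts, !(VAL & _QW_LOCKED));
		return;
	}

	/*
	 * Everyone else backs out and queues behind any waiting writer.
	 * In case #3, P0 merely has interrupts disabled (spin_lock_irq()),
	 * in_interrupt() is false, so its read_lock() takes this branch and
	 * blocks behind P2's write_lock_irq(): deadlock.
	 */
	atomic_sub(_QR_BIAS, &lock->cnts);
	/* ... take lock->wait_lock, re-add _QR_BIAS, wait for the writer ... */
}
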
Mmh, I'm starting to think that, maybe, we need a model (a tool) to
distinguish these and other(?) cases (sorry, I could not resist ;-)
[...]
> ------------------->8
> Subject: [PATCH] locking: More accurate annotations for read_lock()
>
> On the archs using QUEUED_RWLOCKS, read_lock() is not always a recursive
> read lock; actually, it's only recursive if in_interrupt() is true. So
Mmh, taking the "common denominator" over archs/Kconfig options and
CPU states, this would suggest that read_lock() is non-recursive;
it looks like I can say "good-bye" to my idea to define (formalize)
consistent executions/the memory ordering of RW-LOCKS in terms of
the following _emulation_:
void read_lock(rwlock_t *s)
{
	/* a reader bumps the lock count; r0 == -1 means a writer held the lock */
	r0 = atomic_fetch_inc_acquire(&s->val);
}

void read_unlock(rwlock_t *s)
{
	r0 = atomic_fetch_sub_release(1, &s->val);
}

void write_lock(rwlock_t *s)
{
	/* a writer takes the lock only if it is free; r0 == 0 means success */
	r0 = atomic_cmpxchg_acquire(&s->val, 0, -1);
}

void write_unlock(rwlock_t *s)
{
	atomic_set_release(&s->val, 0);
}

filter (~read_lock:r0=-1 /\ write_lock:r0=0)
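
Just to make the intended use of the emulation a bit more concrete, the
kind of litmus test it is meant to support could look like the sketch
below; the test name and the two shared variables (both implicitly
initialized to 0) are mine, the lock/unlock operations are inlined from
the emulation above, and whether the herd7 front-end accepts these
exact primitives is an assumption rather than a statement about current
LKMM support:

C rwlock-emulation-sketch

{
	atomic_t lock;
	int x;
}

P0(atomic_t *lock, int *x)
{
	int r0;

	r0 = atomic_cmpxchg_acquire(lock, 0, -1);	/* write_lock()   */
	WRITE_ONCE(*x, 1);
	atomic_set_release(lock, 0);			/* write_unlock() */
}

P1(atomic_t *lock, int *x)
{
	int r0;
	int r1;

	r0 = atomic_fetch_inc_acquire(lock);		/* read_lock()    */
	r1 = READ_ONCE(*x);
	atomic_fetch_sub_release(1, lock);		/* read_unlock()  */
}

filter (P0:r0=0 /\ ~P1:r0=-1)
exists (P1:r1=0)

The filter plays the same role as above: it discards executions in
which P1's read_lock() observed the write-locked value (-1) or P0's
write_lock() cmpxchg failed, so only executions where both acquisitions
"succeeded" are checked; the exists clause then simply asks whether the
reader's critical section can still run entirely before the writer's.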
[...]
> The code is done, I'm just working on the rework of the documentation
> stuff, so if anyone is interested, they could try it out ;-)
Any idea on how to "educate" the LKMM about this code/documentation?
Andrea