Date:   Tue, 15 Sep 2020 21:10:59 +0800
From:   Boqun Feng <boqun.feng@...il.com>
To:     Qian Cai <cai@...hat.com>
Cc:     "Ahmed S. Darwish" <a.darwish@...utronix.de>,
        Peter Zijlstra <peterz@...radead.org>,
        Ingo Molnar <mingo@...hat.com>, Will Deacon <will@...nel.org>,
        Thomas Gleixner <tglx@...utronix.de>,
        "Sebastian A. Siewior" <bigeasy@...utronix.de>,
        "Paul E. McKenney" <paulmck@...nel.org>,
        Steven Rostedt <rostedt@...dmis.org>,
        LKML <linux-kernel@...r.kernel.org>,
        Stephen Rothwell <sfr@...b.auug.org.au>,
        linux-next@...r.kernel.org, Waiman Long <longman@...hat.com>
Subject: Re: [PATCH v2 0/5] seqlock: Introduce PREEMPT_RT support

On Tue, Sep 15, 2020 at 08:48:17PM +0800, Boqun Feng wrote:
> On Mon, Sep 14, 2020 at 08:20:53PM -0400, Qian Cai wrote:
> > On Fri, 2020-09-04 at 17:32 +0200, Ahmed S. Darwish wrote:
> > > Hi,
> > > 
> > > Changelog-v2
> > > ============
> > > 
> > >   - Standardize on seqcount_LOCKNAME_t as the canonical reference for
> > >     sequence counters with associated locks, instead of v1
> > >     seqcount_LOCKTYPE_t.
> > > 
> > >   - Use unique prefix "seqprop_*" for all seqcount_t/seqcount_LOCKNAME_t
> > >     property accessors.
> > > 
> > >   - Touch up the lock-unlock rationale for more clarity. Enforce writer
> > >     non-preemptibility using "__seq_enforce_writer_non_preemptibility()".
> > > 
> > > Cover letter (v1)
> > > =================
> > > 
> > > https://lkml.kernel.org/r/20200828010710.5407-1-a.darwish@linutronix.de
> > > 
> > > Preemption must be disabled before entering a sequence counter write
> > > side critical section.  Otherwise the read side section can preempt the
> > > write side section and spin for the entire scheduler tick.  If that
> > > reader belongs to a real-time scheduling class, it can spin forever and
> > > the kernel will livelock.
> > > 
> > > Disabling preemption cannot be done for PREEMPT_RT though: it can lead
> > > to higher latencies, and the write side sections will not be able to
> > > acquire locks which become sleeping locks (e.g. spinlock_t).
> > > 
> > > To remain preemptible, while avoiding a possible livelock caused by the
> > > reader preempting the writer, use a different technique: let the reader
> > > detect if a seqcount_LOCKNAME_t writer is in progress. If that's the
> > > case, acquire then release the associated LOCKNAME writer serialization
> > > lock. This will allow any possibly-preempted writer to make progress
> > > until the end of its writer serialization lock critical section.
> > > 
> > > Implement this lock-unlock technique for all seqcount_LOCKNAME_t with
> > > an associated (PREEMPT_RT) sleeping lock, and for seqlock_t.
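As a rough illustration of the read-side technique described above (a
simplified sketch only, not the actual seqlock.h implementation; the
structure and helper names below are made up for the example):

struct seqcount_lock_demo {
	unsigned int sequence;	/* odd while a writer is in progress */
	spinlock_t *lock;	/* associated writer serialization lock */
};

static inline unsigned int demo_read_begin(struct seqcount_lock_demo *s)
{
	unsigned int seq;

	for (;;) {
		seq = READ_ONCE(s->sequence);
		if (likely(!(seq & 1)))
			break;		/* no writer in progress, proceed */

		/*
		 * A writer is in progress and may have been preempted.
		 * On PREEMPT_RT the associated lock is a sleeping lock,
		 * so acquiring and releasing it here blocks behind the
		 * writer (and priority-boosts it) instead of spinning,
		 * letting the writer finish its critical section.
		 */
		spin_lock(s->lock);
		spin_unlock(s->lock);
	}

	smp_rmb();	/* order the sequence read before the data reads */
	return seq;
}

static inline bool demo_read_retry(struct seqcount_lock_demo *s,
				   unsigned int seq)
{
	smp_rmb();	/* order the data reads before the sequence re-read */
	return unlikely(READ_ONCE(s->sequence) != seq);
}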
> > 
> > Reverting this patchset [1] from today's linux-next fixed the splat below. The
> > splat looks like a false positive anyway, because the existing locking
> > dependency chain from task #1 here is:
> > 
> > &s->seqcount#2 ---> pidmap_lock
> > 
> > [  528.078061][ T7867] -> #1 (pidmap_lock){....}-{2:2}:
> > [  528.078078][ T7867]        lock_acquire+0x10c/0x560
> > [  528.078089][ T7867]        _raw_spin_lock_irqsave+0x64/0xb0
> > [  528.078108][ T7867]        free_pid+0x5c/0x160
> > free_pid at kernel/pid.c:131
> > [  528.078127][ T7867]        release_task.part.40+0x59c/0x7f0
> > __unhash_process at kernel/exit.c:76
> > (inlined by) __exit_signal at kernel/exit.c:147
> > (inlined by) release_task at kernel/exit.c:198
> > [  528.078145][ T7867]        do_exit+0x77c/0xda0
> > exit_notify at kernel/exit.c:679
> > (inlined by) do_exit at kernel/exit.c:826
> > [  528.078163][ T7867]        kthread+0x148/0x1d0
> > [  528.078182][ T7867]        ret_from_kernel_thread+0x5c/0x80
> > 
> > It is write_seqlock(&sig->stats_lock) in __exit_signal(), but the &s->seqcount#2 
> > in read_mems_allowed_begin() is read_seqcount_begin(&current->mems_allowed_seq), 
> > so there should be no deadlock?
> > 
> 
> I think this happened because seqcount_##lockname##_init() is defined as
> a function rather than a macro, so when seqcount_init() gets expanded
> inside that function, the lock_class_key of the seqcount ends up being a
> static variable of the seqcount_##lockname##_init() function. As a
> result, all seqcount_##lockname##_t instances in the same compile unit
> (in this case kernel/fork.c) share the same lock class key, and lockdep
> thinks they are the same lock ;-)
> 
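To make the sharing issue concrete, here is a small userspace-only sketch
(nothing to do with the real seqlock code; all names are made up): a static
variable declared inside a macro gives one instance per expansion site,
while the same macro expanded once inside a wrapper function gives a single
instance shared by every caller, which is exactly what collapses the lockdep
classes.

#include <stdio.h>

struct key { int dummy; };

/* Macro form: each call site gets its own static key. */
#define init_with_macro(name)						\
	do {								\
		static struct key __key;				\
		printf("%s: key at %p\n", (name), (void *)&__key);	\
	} while (0)

/* Function form: the macro expands only once, so all callers share one key. */
static void init_with_function(const char *name)
{
	init_with_macro(name);
}

int main(void)
{
	init_with_macro("a");		/* two different addresses ...      */
	init_with_macro("b");		/* ... i.e. two distinct classes    */

	init_with_function("c");	/* same address twice: one shared   */
	init_with_function("d");	/* class, as lockdep saw in fork.c  */

	return 0;
}

With seqcount_##lockname##_init() being a function, we get the second
pattern, hence a single class for the whole compile unit.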

I don't know how to fix this properly yet, but below is an ugly attempt;
it is only build-tested, just food for thought.

Regards,
Boqun

--------------->8
diff --git a/include/linux/seqlock.h b/include/linux/seqlock.h
index f73c7eb68f27..938a5053def3 100644
--- a/include/linux/seqlock.h
+++ b/include/linux/seqlock.h
@@ -84,14 +84,18 @@ static inline void __seqcount_init(seqcount_t *s, const char *name,
 # define SEQCOUNT_DEP_MAP_INIT(lockname)				\
 		.dep_map = { .name = #lockname }
 
+# define MSIOCU 8 /* MAX SEQCOUNT IN ONE COMPILE UNIT */
 /**
  * seqcount_init() - runtime initializer for seqcount_t
  * @s: Pointer to the seqcount_t instance
  */
 # define seqcount_init(s)						\
 	do {								\
-		static struct lock_class_key __key;			\
-		__seqcount_init((s), #s, &__key);			\
+		static struct lock_class_key __key[MSIOCU];		\
+		static int idx = 0;					\
+									\
+		BUG_ON(idx >= MSIOCU);					\
+		__seqcount_init((s), #s, &__key[idx++]);		\
 	} while (0)
 
 static inline void seqcount_lockdep_reader_access(const seqcount_t *s)
