Date:	Sat, 30 Aug 2008 13:08:17 +0200
From:	Peter Zijlstra <peterz@...radead.org>
To:	Gregory Haskins <ghaskins@...ell.com>
Cc:	mingo@...e.hu, rostedt@...dmis.org, tglx@...utronix.de,
	linux-kernel@...r.kernel.org, linux-rt-users@...r.kernel.org,
	gregory.haskins@...il.com
Subject: Re: [PATCH] seqlock: serialize against writers

On Fri, 2008-08-29 at 11:44 -0400, Gregory Haskins wrote:
> *Patch submitted for inclusion in PREEMPT_RT 26-rt4.  Applies to 2.6.26.3-rt3*
> 
> Hi Ingo, Steven, Thomas,
>   Please consider for -rt4.  This fixes a nasty deadlock on my systems under
>   heavy load.
> 
> -Greg
> 
> ----
> 
> Seqlocks have always advertised that readers do not "block", but this was
> never really true.  Readers have always logically blocked at the head of
> the critical section under contention with writers, regardless of whether
> they were allowed to run code or not.
> 
> Recent changes in this space (88a411c07b6fedcfc97b8dc51ae18540bd2beda0)
> have turned this into a more explicit blocking operation in mainline.
> However, this change highlights a shortcoming in -rt because the
> normal seqlock_t is preemptible.  This means that we can potentially
> deadlock should a reader spin waiting for a write critical-section to end
> while the writer is preempted.

I think the technical term is livelock.

So the problem is that the write side gets preempted, and the read side
spins in a non-preemptive fashion?

Looking at the code, __read_seqbegin() doesn't disable preemption, so
even though spinning against a preempted writer is highly inefficient,
it shouldn't livelock, since the spinner can itself get preempted,
giving the writer a chance to finish.
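
For reference, the read side in question follows the classic
sequence-counter pattern; a simplified sketch (not the actual
2.6.26.3-rt3 code, the names and layout are illustrative):

typedef struct {
	unsigned sequence;	/* odd while a writer is inside its critical section */
	/* writer-side lock omitted for brevity */
} seqlock_sketch_t;

static unsigned read_seqbegin_sketch(const seqlock_sketch_t *sl)
{
	unsigned seq;

	for (;;) {
		seq = ACCESS_ONCE(sl->sequence);
		if (!(seq & 1))		/* no writer active, snapshot is usable */
			break;
		/*
		 * A writer is in progress, so spin.  Preemption is not
		 * disabled here, so a reader spinning on a preempted
		 * writer can itself be preempted, letting the writer
		 * run again; inefficient, but not a livelock.
		 */
		cpu_relax();
	}
	smp_rmb();	/* order against the reads inside the critical section */
	return seq;
}

static int read_seqretry_sketch(const seqlock_sketch_t *sl, unsigned start)
{
	smp_rmb();
	/* non-zero: a writer interfered, the caller must retry */
	return ACCESS_ONCE(sl->sequence) != start;
}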

> This patch changes the internal implementation to use an rwlock and forces
> the readers to serialize with the writers under contention.  This has the
> advantage that an -rt seqlock_t will sleep the reader if deadlock is
> imminent, and it will pi-boost the writer to prevent priority inversion.
> 
> This fixes a deadlock discovered during testing where all of the
> high-priority readers were hogging the CPUs and preventing a writer from
> releasing the lock.
> 
> Since seqlocks are designed to be rarely-written locks, this should not
> affect performance in the fast path.

Not quite: seqlocks never suffered the cacheline bouncing that rwlocks
have - which was their strongest point - so I very much dislike this change.
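
To make that concrete: seqlock readers only load the sequence counter,
so the cacheline can stay shared across all reading CPUs, whereas an
rwlock-backed read side has to do an atomic read-modify-write on the
lock word just to enter and leave the read section.  A rough sketch of
that read path (illustrative only, not the code from the posted patch):

typedef struct {
	rwlock_t lock;		/* shared with writers under the proposed scheme */
	unsigned sequence;
} seqlock_rw_sketch_t;

static unsigned read_seqbegin_rw_sketch(seqlock_rw_sketch_t *sl)
{
	unsigned seq;

	read_lock(&sl->lock);	/* atomic RMW: dirties the lock cacheline on every reader */
	seq = sl->sequence;
	read_unlock(&sl->lock);	/* and a second RMW on the way out */
	return seq;
}

So even with no writer anywhere in sight, concurrent readers on
different CPUs bounce that cacheline between them, which is exactly the
cost the sequence-counter scheme was designed to avoid.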

As to the x86_64 gtod-vsyscall, that uses a raw_seqlock_t on -rt, which
is still non-preemptible and should thus not be affected by this
livelock scenario.
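
For completeness, that read loop looks roughly like this (the lock and
field names are made up for the sketch; this is not the actual x86_64
vsyscall code).  With a raw, non-preemptible seqlock the writer cannot
be preempted inside its critical section, so the reader's spin is
bounded by the length of the update:

static seqlock_t gtod_lock;	/* a raw_seqlock_t on -rt; initialization omitted */
static u64 gtod_cache_ns;	/* illustrative cached clock value */

static u64 gtod_read_ns_sketch(void)
{
	unsigned seq;
	u64 ns;

	do {
		/* write side runs with preemption off, so this spin is bounded */
		seq = read_seqbegin(&gtod_lock);
		ns = gtod_cache_ns;
	} while (read_seqretry(&gtod_lock, seq));

	return ns;
}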
