Message-ID: <20210816050039.nyor4xtiet77cn7z@offworld>
Date:   Sun, 15 Aug 2021 22:00:39 -0700
From:   Davidlohr Bueso <dave@...olabs.net>
To:     Thomas Gleixner <tglx@...utronix.de>
Cc:     LKML <linux-kernel@...r.kernel.org>,
        Peter Zijlstra <peterz@...radead.org>,
        Ingo Molnar <mingo@...nel.org>,
        Juri Lelli <juri.lelli@...hat.com>,
        Steven Rostedt <rostedt@...dmis.org>,
        Daniel Bristot de Oliveira <bristot@...hat.com>,
        Will Deacon <will@...nel.org>,
        Waiman Long <longman@...hat.com>,
        Boqun Feng <boqun.feng@...il.com>,
        Sebastian Andrzej Siewior <bigeasy@...utronix.de>,
        Mike Galbraith <efault@....de>
Subject: Re: [patch V5 18/72] locking: Add base code for RT rw_semaphore and
 rwlock

On Sun, 15 Aug 2021, Thomas Gleixner wrote:

>On PREEMPT_RT rw_semaphores and rwlocks are substituted with a rtmutex and
>a reader count. The implementation is writer unfair as it is not feasible
>to do priority inheritance on multiple readers, but experience has shown
>that realtime workloads are not the typical workloads which are sensitive
>to writer starvation.

Ok, so on RT the tasklist_lock (rwlock_t) semantics would be similar to the
non-RT behavior in irq context, i.e. writer unfair. And yeah, as with
mmap_sem, many of the sources of writer starvation are well known and not
specific to RT.
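
For anyone following along, the substitution being described boils down to
the below (field names per the rwbase_rt base code, if I'm reading the
series right):

	/*
	 * A PI-aware rtmutex serializing writers, plus an atomic reader
	 * count. READER_BIAS keeps ->readers negative while no writer
	 * holds the lock.
	 */
	struct rwbase_rt {
		atomic_t		readers;
		struct rt_mutex_base	rtmutex;
	};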

>+/*
>+ * RT-specific reader/writer semaphores and reader/writer locks
>+ *
>+ * down_write/write_lock()
>+ *  1) Lock rtmutex
>+ *  2) Remove the reader BIAS to force readers into the slow path
>+ *  3) Wait until all readers have left the critical region
>+ *  4) Mark it write locked
>+ *
>+ * up_write/write_unlock()
>+ *  1) Remove the write locked marker
>+ *  2) Set the reader BIAS so readers can use the fast path again
>+ *  3) Unlock rtmutex to release blocked readers
>+ *
>+ * down_read/read_lock()
>+ *  1) Try fast path acquisition (reader BIAS is set)
>+ *  2) Take tmutex::wait_lock which protects the writelocked flag
>+ *  3) If !writelocked, acquire it for read
>+ *  4) If writelocked, block on tmutex

s/tmutex/rtmutex

>+ *  5) unlock rtmutex, goto 1)
>+ *
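
Side note: the fast path in 1) is presumably just a cmpxchg loop on the
reader count, i.e. something like this sketch (mine, not necessarily the
exact patch code; READER_BIAS keeps ->readers negative while no writer is
around):

	static __always_inline int rwbase_read_trylock(struct rwbase_rt *rwb)
	{
		int r;

		/*
		 * ->readers < 0 means the reader BIAS is set and no writer
		 * holds the lock; try to bump the reader count.
		 */
		for (r = atomic_read(&rwb->readers); r < 0;) {
			if (likely(atomic_try_cmpxchg_acquire(&rwb->readers,
							      &r, r + 1)))
				return 1;	/* fast path acquisition */
		}

		/* BIAS was removed by a writer, take the rtmutex slow path. */
		return 0;
	}
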
>+static void __sched __rwbase_read_unlock(struct rwbase_rt *rwb,
>+					 unsigned int state)
>+{
>+	struct rt_mutex_base *rtm = &rwb->rtmutex;
>+	struct task_struct *owner;
>+
>+	raw_spin_lock_irq(&rtm->wait_lock);
>+	/*
>+	 * Wake the writer, i.e. the rtmutex owner. It might release the
>+	 * rtmutex concurrently in the fast path (due to a signal), but to
>+	 * clean up rwb->readers it needs to acquire rtm->wait_lock. The
>+	 * worst case which can happen is a spurious wakeup.
>+	 */
>+	owner = rt_mutex_owner(rtm);
>+	if (owner)
>+		wake_up_state(owner, state);

Maybe use wake_q to avoid holding wait_lock throughout the wakeup? Something
like the sketch after the quoted hunk below.

>+
>+	raw_spin_unlock_irq(&rtm->wait_lock);
>+}
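
Untested, but the below is roughly what I have in mind, using the existing
DEFINE_WAKE_Q()/wake_q_add()/wake_up_q() machinery. One wrinkle: wake_up_q()
wakes via wake_up_process(), so the explicit @state filtering would be lost:

	static void __sched __rwbase_read_unlock(struct rwbase_rt *rwb,
						 unsigned int state)
	{
		struct rt_mutex_base *rtm = &rwb->rtmutex;
		struct task_struct *owner;
		DEFINE_WAKE_Q(wake_q);

		raw_spin_lock_irq(&rtm->wait_lock);
		/* Queue the writer (the rtmutex owner) for wakeup. */
		owner = rt_mutex_owner(rtm);
		if (owner)
			wake_q_add(&wake_q, owner);
		raw_spin_unlock_irq(&rtm->wait_lock);

		/* Do the actual wakeup without wait_lock held. */
		wake_up_q(&wake_q);
	}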

Thanks,
Davidlohr
