Message-ID: <CANDhNCo4ruC4pP+iDe49b3e1nAcWtYQj4bx82+oZhyLFYkdFJQ@mail.gmail.com>
Date:   Thu, 2 Mar 2023 11:39:12 -0800
From:   John Stultz <jstultz@...gle.com>
To:     Steven Rostedt <rostedt@...dmis.org>
Cc:     LKML <linux-kernel@...r.kernel.org>, Wei Wang <wvw@...gle.com>,
        Midas Chien <midaschieh@...gle.com>,
        Kees Cook <keescook@...omium.org>,
        Anton Vorontsov <anton@...msg.org>,
        "Guilherme G. Piccoli" <gpiccoli@...lia.com>,
        Tony Luck <tony.luck@...el.com>, kernel-team@...roid.com,
        Thomas Gleixner <tglx@...utronix.de>,
        Peter Zijlstra <peterz@...radead.org>,
        Sebastian Andrzej Siewior <bigeasy@...utronix.de>
Subject: Re: [PATCH] pstore: Revert pmsg_lock back to a normal mutex

On Thu, Mar 2, 2023 at 5:24 AM Steven Rostedt <rostedt@...dmis.org> wrote:
>
> On Thu,  2 Mar 2023 06:27:41 +0000
> John Stultz <jstultz@...gle.com> wrote:
>
> > This reverts commit 76d62f24db07f22ccf9bc18ca793c27d4ebef721.
> >
> > So while priority inversion on the pmsg_lock is an occasional
> > problem that an rt_mutex would help with, in use cases where
> > logging writes to pmsg heavily from multiple threads, the
> > pmsg_lock can become heavily contended.
> >
> > Normal mutexes can do adaptive spinning, which keeps the
> > contention overhead fairly low, maybe adding on the order of
> > tens of microseconds of waiting, but the slowpath with
> > rt_mutexes makes the blocked tasks sleep and wake. This makes
> > matters worse under heavy contention, as it just allows
> > additional threads to run and line up to try to take the lock.
> >
> > It devolves into a worst-case scenario where the lock
> > acquisition and scheduling overhead dominate, and each thread
> > waits on the order of ~ms to do ~us of work.
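
(For reference: the adaptive spinning mentioned above is mainline's
optimistic spin, where a waiter keeps spinning only while the lock
owner is actively running on a CPU. A simplified sketch of that
heuristic follows; the real logic lives in mutex_spin_on_owner() in
kernel/locking/mutex.c and additionally handles OSQ queueing,
need_resched() interplay, and RCU:)

/*
 * Sketch of the adaptive-spin heuristic: spin while the current
 * owner is on a CPU (it will likely release the lock soon), and
 * give up as soon as the owner sleeps or we need to reschedule.
 */
static bool spin_on_owner_sketch(struct mutex *lock,
				 struct task_struct *owner)
{
	while (__mutex_owner(lock) == owner) {
		/* Owner went to sleep, or we must yield: stop
		 * spinning and fall back to blocking. */
		if (!owner->on_cpu || need_resched())
			return false;
		cpu_relax();
	}
	/* Owner changed or released the lock: retry the acquire. */
	return true;
}
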
> >
> > Obviously, having tons of threads all contending on a single
> > lock for logging is non-optimal, so the proper fix is probably
> > reworking pstore pmsg to have per-cpu buffers so we don't have
> > contention.
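
(To make the per-cpu idea concrete, here is a rough sketch with
invented names -- pmsg_cpu_bufs, pmsg_percpu_write -- of what per-CPU
staging buffers could look like. Real pstore/pmsg code copies from
user space into a persistent_ram zone, and a real change would still
need a flush/merge path that this sketch ignores:)

/*
 * Hypothetical per-CPU pmsg staging buffers: each writer copies
 * into its own CPU's buffer, so concurrent loggers never contend
 * on a single pmsg_lock.
 */
#define PMSG_CPU_BUF_SZ	4096

struct pmsg_cpu_buf {
	size_t	pos;
	char	data[PMSG_CPU_BUF_SZ];
};

static DEFINE_PER_CPU(struct pmsg_cpu_buf, pmsg_cpu_bufs);

static ssize_t pmsg_percpu_write(const char *buf, size_t count)
{
	struct pmsg_cpu_buf *b;
	ssize_t ret = -ENOSPC;

	/* get_cpu_ptr() disables preemption, so nothing else can
	 * touch this CPU's buffer while we copy into it. */
	b = get_cpu_ptr(&pmsg_cpu_bufs);
	if (count <= PMSG_CPU_BUF_SZ - b->pos) {
		memcpy(b->data + b->pos, buf, count);
		b->pos += count;
		ret = count;
	}
	put_cpu_ptr(&pmsg_cpu_bufs);
	return ret;
}
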
>
> Or perhaps we should convert rt_mutex to have adaptive spinning too. This
> will likely be needed for PREEMPT_RT anyway. IIRC, in the PREEMPT_RT patch,
> only the spinlock-converted rt_mutexes used adaptive spinning, and the
> argument against giving the mutex-converted rt_mutexes adaptive spinning
> was that the normal mutex (at that time) did not have it, and we wanted to
> keep it the same as mainline. But that reason no longer appears to hold,
> and perhaps the real answer is to have all rt_mutexes do adaptive
> spinning?
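
(A hypothetical sketch of where that could hook in: the rt_mutex
slowpath would spin on a running owner before enqueueing itself and
sleeping. The helpers rt_mutex_try_acquire_sketch() and
rt_mutex_block_on_sketch() below are invented for illustration; only
rt_mutex_owner() and the owner->on_cpu check mirror real kernel code:)

/*
 * Hypothetical shape of adaptive spinning in the rt_mutex
 * slowpath: before adding ourselves to the wait list and
 * sleeping, spin as long as the owner is running on a CPU.
 */
static int rt_mutex_slowlock_sketch(struct rt_mutex *lock)
{
	for (;;) {
		struct task_struct *owner = rt_mutex_owner(lock);

		/* Lock looks free: try to take it outright. */
		if (!owner && rt_mutex_try_acquire_sketch(lock))
			return 0;

		/* Adaptive part: the owner is on a CPU, so it is
		 * likely to release the lock soon -- keep spinning
		 * instead of paying a sleep/wake round trip. */
		if (owner && owner->on_cpu && !need_resched()) {
			cpu_relax();
			continue;
		}

		/* Owner is blocked (or we must yield): fall back to
		 * the existing enqueue-and-sleep slowpath. */
		return rt_mutex_block_on_sketch(lock);
	}
}
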

This sounds like something to try as well (though I'd still want to
push this revert in the meantime to avoid regressions).

Do you have a more specific pointer to this adaptive spinning for the
spinlock-converted rt_mutexes? Looking at the current PREEMPT_RT
patch, I'm not seeing any changes to locking.

thanks
-john
