Message-ID: <20230302082414.77613351@gandalf.local.home>
Date:   Thu, 2 Mar 2023 08:24:14 -0500
From:   Steven Rostedt <rostedt@...dmis.org>
To:     John Stultz <jstultz@...gle.com>
Cc:     LKML <linux-kernel@...r.kernel.org>, Wei Wang <wvw@...gle.com>,
        Midas Chien <midaschieh@...gle.com>,
        Kees Cook <keescook@...omium.org>,
        Anton Vorontsov <anton@...msg.org>,
        "Guilherme G. Piccoli" <gpiccoli@...lia.com>,
        Tony Luck <tony.luck@...el.com>, kernel-team@...roid.com,
        Thomas Gleixner <tglx@...utronix.de>,
        Peter Zijlstra <peterz@...radead.org>,
        Sebastian Andrzej Siewior <bigeasy@...utronix.de>
Subject: Re: [PATCH] pstore: Revert pmsg_lock back to a normal mutex

On Thu,  2 Mar 2023 06:27:41 +0000
John Stultz <jstultz@...gle.com> wrote:

> This reverts commit 76d62f24db07f22ccf9bc18ca793c27d4ebef721.
> 
> While priority inversion on the pmsg_lock is an occasional
> problem that an rt_mutex would help with, in use cases where
> logging writes to pmsg heavily from multiple threads, the
> pmsg_lock can become heavily contended.
> 
> Normal mutexes can do adaptive spinning, which keeps contention
> overhead fairly low, adding maybe on the order of tens of
> microseconds of delay while waiting, but the rt_mutex slowpath
> makes blocked tasks sleep and wake. This makes matters worse
> under heavy contention, as it just allows additional threads to
> run and line up to try to take the lock.
> 
> It devolves into a worst-case scenario where lock-acquisition
> and scheduling overhead dominate, and each thread waits on the
> order of milliseconds to do microseconds of work.
> 
> Obviously, having tons of threads all contending on a single
> lock for logging is non-optimal, so the proper fix is probably
> reworking pstore pmsg to have per-cpu buffers so we don't have
> contention.
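
As a rough illustration of the per-cpu idea suggested above, the layout could
look something like the sketch below. This is only a sketch under the
assumption of per-cpu staging buffers; the names (pmsg_cpu_buf,
write_pmsg_percpu, pmsg_percpu_init) are invented here and are not part of
current pstore.

#include <linux/fs.h>
#include <linux/init.h>
#include <linux/mutex.h>
#include <linux/percpu.h>
#include <linux/smp.h>

/*
 * Hypothetical sketch only -- not current pstore code.  Each CPU gets its
 * own lock (and, in a real implementation, its own staging buffer or
 * persistent ram zone), so concurrent pmsg writers on different CPUs no
 * longer serialize on a single global pmsg_lock.
 */
struct pmsg_cpu_buf {
	struct mutex lock;	/* serializes writers using this CPU's buffer */
	/* per-cpu staging buffer / record state would live here */
};

static DEFINE_PER_CPU(struct pmsg_cpu_buf, pmsg_cpu_buf);

static int __init pmsg_percpu_init(void)
{
	int cpu;

	for_each_possible_cpu(cpu)
		mutex_init(&per_cpu(pmsg_cpu_buf, cpu).lock);
	return 0;
}

static ssize_t write_pmsg_percpu(struct file *file, const char __user *buf,
				 size_t count, loff_t *ppos)
{
	/*
	 * Pick the buffer of the CPU we happen to be running on; migrating
	 * afterwards is harmless, the choice only spreads contention.
	 */
	struct pmsg_cpu_buf *p = &per_cpu(pmsg_cpu_buf, raw_smp_processor_id());

	mutex_lock(&p->lock);
	/* copy from 'buf' into this CPU's staging buffer here */
	mutex_unlock(&p->lock);
	return count;
}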

Or perhaps we should convert rt_mutex to have adaptive spinning too. This
will likely be needed for PREEMPT_RT anyway. IIRC, in the PREEMPT_RT patch,
only the spinlock-converted rt_mutexes used adaptive spinning, and the
argument against giving the mutex-converted rt_mutexes adaptive spinning was
that the normal mutex (at that time) did not have it, and we wanted to keep
the behavior the same as mainline. But it appears that reason no longer
holds, so perhaps the real answer is to have all mutexes do adaptive
spinning?

-- Steve
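
As a rough sketch of the adaptive-spinning idea being discussed (in the
spirit of the mainline mutex optimistic-spin slowpath), a contender keeps
spinning only while the lock owner is actively running on a CPU, and falls
back to sleeping otherwise. The type and helper names below (example_lock,
owner_is_on_cpu) are invented for illustration and are not the real
mutex/rt_mutex internals.

#include <linux/sched.h>

struct example_lock {
	struct task_struct *owner;	/* NULL when the lock is free */
};

/* Crude stand-in for the real "is the owner running on a CPU?" check. */
static bool owner_is_on_cpu(struct task_struct *owner)
{
	return READ_ONCE(owner->on_cpu);
}

static bool adaptive_spin(struct example_lock *lock)
{
	for (;;) {
		struct task_struct *owner = READ_ONCE(lock->owner);

		if (!owner)
			return true;	/* lock looks free: retry the fast path */
		if (!owner_is_on_cpu(owner) || need_resched())
			return false;	/* owner is off-CPU or we must yield: block */
		cpu_relax();		/* owner is running: it should release soon */
	}
}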


> 
> But in the short term, let's revert the change to the rt_mutex
> and go back to normal mutexes to avoid a potentially major
> performance regression.
> 
> Cc: Wei Wang <wvw@...gle.com>
> Cc: Midas Chien <midaschieh@...gle.com>
> Cc: "Chunhui Li (李春辉)" <chunhui.li@...iatek.com>
> Cc: Steven Rostedt <rostedt@...dmis.org>
> Cc: Kees Cook <keescook@...omium.org>
> Cc: Anton Vorontsov <anton@...msg.org>
> Cc: "Guilherme G. Piccoli" <gpiccoli@...lia.com>
> Cc: Tony Luck <tony.luck@...el.com>
> Cc: kernel-team@...roid.com
> Fixes: 76d62f24db07 ("pstore: Switch pmsg_lock to an rt_mutex to avoid priority inversion")
> Reported-by: "Chunhui Li (李春辉)" <chunhui.li@...iatek.com>
> Signed-off-by: John Stultz <jstultz@...gle.com>
> ---
>  fs/pstore/pmsg.c | 7 +++----
>  1 file changed, 3 insertions(+), 4 deletions(-)
> 
> diff --git a/fs/pstore/pmsg.c b/fs/pstore/pmsg.c
> index ab82e5f05346..b31c9c72d90b 100644
> --- a/fs/pstore/pmsg.c
> +++ b/fs/pstore/pmsg.c
> @@ -7,10 +7,9 @@
>  #include <linux/device.h>
>  #include <linux/fs.h>
>  #include <linux/uaccess.h>
> -#include <linux/rtmutex.h>
>  #include "internal.h"
>  
> -static DEFINE_RT_MUTEX(pmsg_lock);
> +static DEFINE_MUTEX(pmsg_lock);
>  
>  static ssize_t write_pmsg(struct file *file, const char __user *buf,
>  			  size_t count, loff_t *ppos)
> @@ -29,9 +28,9 @@ static ssize_t write_pmsg(struct file *file, const char __user *buf,
>  	if (!access_ok(buf, count))
>  		return -EFAULT;
>  
> -	rt_mutex_lock(&pmsg_lock);
> +	mutex_lock(&pmsg_lock);
>  	ret = psinfo->write_user(&record, buf);
> -	rt_mutex_unlock(&pmsg_lock);
> +	mutex_unlock(&pmsg_lock);
>  	return ret ? ret : count;
>  }
>  
