Message-ID: <20181013073004.GA29921@kroah.com>
Date: Sat, 13 Oct 2018 09:30:04 +0200
From: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
To: Gao Xiang <hsiangkao@....com>
Cc: Philippe Ombredanne <pombredanne@...b.com>,
Kate Stewart <kstewart@...uxfoundation.org>,
Thomas Gleixner <tglx@...utronix.de>,
linux-kernel@...r.kernel.org, Miao Xie <miaoxie@...wei.com>,
Chao Yu <chao@...nel.org>
Subject: Re: [RFC PATCH] bit_spinlock: introduce smp_cond_load_relaxed
On Sat, Oct 13, 2018 at 03:22:08PM +0800, Gao Xiang wrote:
> Hi Greg,
>
> On 2018/10/13 15:04, Greg Kroah-Hartman wrote:
> > On Sat, Oct 13, 2018 at 02:47:29PM +0800, Gao Xiang wrote:
> >> It is better to use smp_cond_load_relaxed instead of open-coded
> >> busy-waiting in bit_spinlock.
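(For reference, the contended path of bit_spin_lock() currently busy-waits
with an open-coded cpu_relax() loop, simplified from
include/linux/bit_spinlock.h below.  The proposed change would presumably
end up with something like the second loop; this is only a sketch of the
idea, not the actual patch, and it assumes bitnum stays within the word
that addr points at:)

	/* today: open-coded busy wait on the lock bit */
	while (unlikely(test_and_set_bit_lock(bitnum, addr))) {
		preempt_enable();
		do {
			cpu_relax();
		} while (test_bit(bitnum, addr));
		preempt_disable();
	}

	/* proposed shape: let the architecture wait for the word to change */
	while (unlikely(test_and_set_bit_lock(bitnum, addr))) {
		preempt_enable();
		smp_cond_load_relaxed(addr, !(VAL & (1UL << bitnum)));
		preempt_disable();
	}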
> >
> > Why? I think we need some kind of "proof" that this is true before
> > being able to accept a patch like this, don't you agree?
>
> There are some earlier materials which discuss smp_cond_load_*:
> https://patchwork.kernel.org/patch/10335991/
> https://patchwork.kernel.org/patch/10325057/
>
> On ARM64, they implement a function called "cmpwait", which uses
> hardware instructions to monitor a value for a change; I think it is
> more energy efficient than an open-coded busy loop...
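(For background: the generic fallback of smp_cond_load_relaxed() in
include/asm-generic/barrier.h is still just a READ_ONCE()/cpu_relax()
loop, while arm64 overrides the relax step with __cmpwait_relaxed(),
which uses LDXR + WFE to park the CPU until the monitored cacheline is
written.  Roughly, from memory, so details may differ across kernel
versions:)

	/* generic fallback: no smarter than an open-coded spin */
	#define smp_cond_load_relaxed(ptr, cond_expr) ({	\
		typeof(ptr) __PTR = (ptr);			\
		typeof(*ptr) VAL;				\
		for (;;) {					\
			VAL = READ_ONCE(*__PTR);		\
			if (cond_expr)				\
				break;				\
			cpu_relax();				\
		}						\
		VAL;						\
	})

	/* arm64 override: wait-for-event instead of spinning */
	#define smp_cond_load_relaxed(ptr, cond_expr) ({	\
		typeof(ptr) __PTR = (ptr);			\
		typeof(*ptr) VAL;				\
		for (;;) {					\
			VAL = READ_ONCE(*__PTR);		\
			if (cond_expr)				\
				break;				\
			__cmpwait_relaxed(__PTR, VAL);		\
		}						\
		VAL;						\
	})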
>
> And it seems smp_cond_load_* is already used in the current kernel, such as:
> ./kernel/locking/mcs_spinlock.h
> ./kernel/locking/qspinlock.c
> ./kernel/sched/core.c
> ./kernel/smp.c
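(For instance, kernel/smp.c waits for a call_single_data slot to be
released roughly like this; simplified, and note it is the _acquire
variant there:)

	/* kernel/smp.c: spin until the CSD_FLAG_LOCK bit is cleared */
	static __always_inline void csd_lock_wait(call_single_data_t *csd)
	{
		smp_cond_load_acquire(&csd->flags, !(VAL & CSD_FLAG_LOCK));
	}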
>
> For other architectures like x86, I think they could implement an
> optimized smp_cond_load_* later, as arm64 already does.

And have you benchmarked this change to show that it provides any
benefit?
You need to do that...
thanks,
greg k-h