Message-ID: <20181105065431.GF15378@yi.y.sun>
Date: Mon, 5 Nov 2018 14:54:31 +0800
From: Yi Sun <yi.y.sun@...ux.intel.com>
To: Waiman Long <longman@...hat.com>
Cc: Peter Zijlstra <peterz@...radead.org>,
Juergen Gross <jgross@...e.com>, linux-kernel@...r.kernel.org,
x86@...nel.org, tglx@...utronix.de, chao.p.peng@...el.com,
chao.gao@...el.com, isaku.yamahata@...el.com,
michael.h.kelley@...rosoft.com, tianyu.lan@...rosoft.com,
"K. Y. Srinivasan" <kys@...rosoft.com>,
Haiyang Zhang <haiyangz@...rosoft.com>,
Stephen Hemminger <sthemmin@...rosoft.com>,
"mingo@...hat.com" <mingo@...hat.com>,
Will Deacon <will.deacon@....com>
Subject: Re: [PATCH v1 2/2] x86/hyperv: make HvNotifyLongSpinWait hypercall
On 18-11-01 08:59:08, Waiman Long wrote:
> On 10/31/2018 11:20 PM, Yi Sun wrote:
> > On 18-10-31 18:15:39, Peter Zijlstra wrote:
> >> On Wed, Oct 31, 2018 at 11:07:22AM -0400, Waiman Long wrote:
> >>> On 10/31/2018 10:10 AM, Peter Zijlstra wrote:
> >>>> On Wed, Oct 31, 2018 at 09:54:17AM +0800, Yi Sun wrote:
> >>>>> On 18-10-23 17:33:28, Yi Sun wrote:
> >>>>>> On 18-10-23 10:51:27, Peter Zijlstra wrote:
> >>>>>>> Can you try and explain why vcpu_is_preempted() doesn't work for you?
> >>>>>> I thought HvSpinWaitInfo was used to notify the hypervisor of the
> >>>>>> spin count, which is different from vcpu_is_preempted(). So I did
> >>>>>> not consider vcpu_is_preempted().
> >>>>>>
> >>>>>> But HvSpinWaitInfo is quite a simple function and could be combined
> >>>>>> with vcpu_is_preempted(). So I think it is OK to use
> >>>>>> vcpu_is_preempted() to keep the code clean. I will give it a try.
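
For what it's worth, the notification itself is just a fast hypercall. A
minimal sketch, assuming the HVCALL_NOTIFY_LONG_SPIN_WAIT definition and
the hv_do_fast_hypercall8() helper from the Hyper-V headers; the threshold
name and value below are only illustrative, not taken from the patch:

#include <asm/hyperv-tlfs.h>
#include <asm/mshyperv.h>

/* Illustrative threshold; the real value would need tuning. */
#define HV_SPIN_WAIT_THRESHOLD  1024

static inline void hv_notify_long_spin_wait(u64 spin_count)
{
        /* Tell the hypervisor how long this vCPU has been spinning. */
        if (spin_count >= HV_SPIN_WAIT_THRESHOLD)
                hv_do_fast_hypercall8(HVCALL_NOTIFY_LONG_SPIN_WAIT,
                                      spin_count);
}
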
> >>>>> After checking the code, there is one issue with calling
> >>>>> vcpu_is_preempted(). There are two spin loops in qspinlock_paravirt.h.
> >>>>> One loop, in 'pv_wait_node', calls vcpu_is_preempted(), but the other
> >>>>> loop, in 'pv_wait_head_or_lock', does not. It also does not call any
> >>>>> other op of 'pv_lock_ops' inside the loop. So I am afraid we have to
> >>>>> add one more op to 'pv_lock_ops' to do this.
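
To make the difference concrete, the two loop shapes are roughly as below.
This is heavily simplified, not the real qspinlock_paravirt.h code, and
pv_spin_wait_info() is only a placeholder name for the op I am proposing:

/*
 * Inner spin loop of pv_wait_node(): it already gives up spinning
 * early when the previous queue node's vCPU appears preempted.
 */
for (loop = SPIN_THRESHOLD; loop; loop--) {
        if (READ_ONCE(node->locked))
                return;
        if (vcpu_is_preempted(prev_cpu))
                break;                          /* stop spinning early */
        cpu_relax();
}

/*
 * Spin loop of pv_wait_head_or_lock(): there is no such hook today,
 * so a long-spin notification would need a new pv_lock_ops callback.
 */
for (loop = SPIN_THRESHOLD; loop; loop--) {
        if (trylock_clear_pending(lock))
                goto gotlock;
        pv_spin_wait_info(SPIN_THRESHOLD - loop);       /* placeholder */
        cpu_relax();
}
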
> >>>> Why? Would not something like the below cure that? Waiman, can you have
> >>>> a look at this; I always forget how that paravirt crud works.
> >>> There are two major reasons why the vcpu_is_preempted() test isn't done
> >>> at pv_wait_head_or_lock(). First of all, we may not have a valid prev
> >>> pointer at all if this CPU is the first one to enter the queue while the
> >>> lock is busy. Secondly, because of lock stealing, the cpu number pointed
> >>> to by a valid prev pointer may not be the actual cpu that is currently
> >>> holding the lock. Another minor reason is that we want to minimize the
> >>> lock transfer latency and so don't want to sleep too early while waiting
> >>> at the queue head.
> >> So Yi, are you actually seeing a problem? If so, can you give details?
> > Where does the patch come from? I cannot find it through Google.
> >
> > Per Waiman's comment, it does not seem suitable to call vcpu_is_preempted()
> > in pv_wait_head_or_lock(). So we cannot make the HvSpinWaitInfo
> > notification through vcpu_is_preempted() in that case. Based on that, I
> > suggest adding one more callback function to pv_lock_ops.
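
To be concrete, what I have in mind is roughly the following. This is only
a sketch: the spin_wait_info name and signature are placeholders, and the
existing members are written from memory rather than copied from
paravirt_types.h:

struct pv_lock_ops {
        void (*queued_spin_lock_slowpath)(struct qspinlock *lock, u32 val);
        struct paravirt_callee_save queued_spin_unlock;

        void (*wait)(u8 *ptr, u8 val);
        void (*kick)(int cpu);

        struct paravirt_callee_save vcpu_is_preempted;

        /* Proposed: notify the hypervisor about a long spin wait. */
        void (*spin_wait_info)(u32 spin_count);
} __no_randomize_layout;
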
>
> I am hesitant to add any additional check in the spinning loop of
> pv_wait_head_or_lock(), especially one that is a hypercall or a callback
> that will take time to execute. The testing that I had done in the past
> indicated that it would slow down locking performance, especially if the
> VM wasn't overcommitted at all.
>
> Any additional slack in pv_wait_node() can be mitigated by the lock
> stealing that can happen. Slack in pv_wait_head_or_lock(), on the other
> hand, will certainly increase the lock transfer latency and impact
> performance. So you need performance data to show that it is worthwhile
> to do so.
>
OK, I will run a performance test to show whether it is worthwhile to call
SpinWaitInfo in pv_wait_head_or_lock().
> As for the performance test, the kernel has a built-in locktorture test
> if you have configured it in. So show us the performance data with and
> without the patch.
Thank you! I will run a performance test for the whole patch.
>
> Cheers,
> Longman