Message-ID: <57AE54AC.1010800@hpe.com>
Date: Fri, 12 Aug 2016 18:58:52 -0400
From: Waiman Long <waiman.long@....com>
To: Dave Hansen <dave.hansen@...el.com>
CC: Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>,
"H. Peter Anvin" <hpa@...or.com>, <linux-kernel@...r.kernel.org>,
<x86@...nel.org>, Borislav Petkov <bp@...e.de>,
Andy Lutomirski <luto@...nel.org>,
Prarit Bhargava <prarit@...hat.com>,
Scott J Norton <scott.norton@....com>,
Douglas Hatch <doug.hatch@....com>,
Randy Wright <rwright@....com>
Subject: Re: [PATCH v5] x86/hpet: Reduce HPET counter read contention
On 08/12/2016 05:44 PM, Dave Hansen wrote:
> On 08/12/2016 02:25 PM, Waiman Long wrote:
>> + do {
>> + cpu_relax();
>> + new.lockval = READ_ONCE(hpet.lockval);
>> + } while ((new.value == old.value) && raw_spin_is_locked(&new.lock));
> While it gets more far-fetched, this isn't guaranteed to make progress
> until the saved HPET value actually changes. You could have a constant
> stream of other CPUs going and doing hpet_readl() (and getting the same
> value back from a sloooow HPET). So each time through this loop, this
> processor sees (new.value == old.value), and sees the lock held.
That is the point. All CPUs that try to read the HPET simultaneously
will get the same value back instead of waiting in line to get
slightly different values. They only need to wait until the lock
holder has read the new HPET value. If you have n CPUs trying to read
the HPET and the read latency is T, the latency for all the CPUs to
read it is just T instead of a worst-case latency of nT or an average
of (n+1)T/2.

What we don't want is to return a stale value that makes it look like
we go backward in time.
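
For illustration, here is a simplified user-space sketch of the
combined lock+value scheme described above. The layout mirrors the
idea in the patch, but the names and the simulated slow counter are
hypothetical, not the actual patch code:

/*
 * Sketch of the cached-read scheme: one reader takes the lock and
 * touches the slow counter; everyone else spins on the combined
 * 64-bit word and reuses the value the lock holder publishes.
 */
#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>

union cached_read {
	struct {
		uint32_t lock;	/* 0 = free, 1 = held */
		uint32_t value;	/* last counter value read */
	};
	uint64_t lockval;	/* both fields as one word */
};

static _Atomic uint64_t shared;	/* holds a union cached_read */

/* Stand-in for a slow hardware counter read (hypothetical). */
static uint32_t slow_counter_read(void)
{
	static _Atomic uint32_t fake;
	return atomic_fetch_add(&fake, 1) + 1;
}

static uint32_t cached_counter_read(void)
{
	union cached_read old, new;

	old.lockval = atomic_load(&shared);

	/* Try to become the one CPU that reads the hardware. */
	if (old.lock == 0) {
		new.lockval = old.lockval;
		new.lock = 1;
		if (atomic_compare_exchange_strong(&shared, &old.lockval,
						   new.lockval)) {
			new.value = slow_counter_read();
			new.lock = 0;
			/* Publish the new value and drop the lock. */
			atomic_store(&shared, new.lockval);
			return new.value;
		}
	}

	/*
	 * Someone else holds the lock: spin until it publishes a new
	 * value or drops the lock, then reuse the current value.  All
	 * waiters finish after a single read latency T.
	 */
	do {
		new.lockval = atomic_load(&shared);
	} while (new.value == old.value && new.lock != 0);

	return new.value;
}

int main(void)
{
	printf("read 1: %u\n", cached_counter_read());
	printf("read 2: %u\n", cached_counter_read());
	return 0;
}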
Cheers,
Longman