Message-ID: <57AE52CE.3040302@hpe.com>
Date: Fri, 12 Aug 2016 18:50:54 -0400
From: Waiman Long <waiman.long@....com>
To: Dave Hansen <dave.hansen@...el.com>
CC: Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>,
"H. Peter Anvin" <hpa@...or.com>, <linux-kernel@...r.kernel.org>,
<x86@...nel.org>, Borislav Petkov <bp@...e.de>,
Andy Lutomirski <luto@...nel.org>,
Prarit Bhargava <prarit@...hat.com>,
Scott J Norton <scott.norton@....com>,
Douglas Hatch <doug.hatch@....com>,
Randy Wright <rwright@....com>
Subject: Re: [PATCH v5] x86/hpet: Reduce HPET counter read contention
On 08/12/2016 05:38 PM, Dave Hansen wrote:
> On 08/12/2016 02:25 PM, Waiman Long wrote:
>> + * The lock and the hpet value are stored together and can be read in a
>> + * single atomic 64-bit read. It is explicitly assumed that the raw spinlock
>> + * size is 32-bit.
> So what happens when we have all the fun debugging options on?
>
>> typedef struct raw_spinlock {
>>         arch_spinlock_t raw_lock;
>> #ifdef CONFIG_GENERIC_LOCKBREAK
>>         unsigned int break_lock;
>> #endif
>> #ifdef CONFIG_DEBUG_SPINLOCK
>>         unsigned int magic, owner_cpu;
>>         void *owner;
>> #endif
>> #ifdef CONFIG_DEBUG_LOCK_ALLOC
>>         struct lockdep_map dep_map;
>> #endif
>> } raw_spinlock_t;
Sorry, the comment should say arch_spinlock_t instead: even with the debug options enabled, the arch_spinlock_t embedded at the start of raw_spinlock_t stays 32-bit, so the single 64-bit read of lock + value still works. I will fix the comment.
Cheers,
Longman