Message-ID: <87im4qvhky.fsf@linux.intel.com>
Date: Mon, 12 Apr 2021 23:12:13 -0700
From: Andi Kleen <ak@...ux.intel.com>
To: Alex Kogan <alex.kogan@...cle.com>
Cc: linux@...linux.org.uk, peterz@...radead.org, mingo@...hat.com,
will.deacon@....com, arnd@...db.de, longman@...hat.com,
linux-arch@...r.kernel.org, linux-arm-kernel@...ts.infradead.org,
linux-kernel@...r.kernel.org, tglx@...utronix.de, bp@...en8.de,
hpa@...or.com, x86@...nel.org, guohanjun@...wei.com,
jglauber@...vell.com, steven.sistare@...cle.com,
daniel.m.jordan@...cle.com, dave.dice@...cle.com
Subject: Re: [PATCH v14 4/6] locking/qspinlock: Introduce starvation avoidance into CNA
Andi Kleen <ak@...ux.intel.com> writes:
> Alex Kogan <alex.kogan@...cle.com> writes:
>>
>> + numa_spinlock_threshold= [NUMA, PV_OPS]
>> + Set the time threshold in milliseconds for the
>> + number of intra-node lock hand-offs before the
>> + NUMA-aware spinlock is forced to be passed to
>> + a thread on another NUMA node. Valid values
>> + are in the [1..100] range. Smaller values result
>> + in a more fair, but less performant spinlock,
>> + and vice versa. The default value is 10.
>
> ms granularity seems very coarse-grained for this. Surely
> at some point of spinning you can afford a ktime_get? But ok.
Actually, thinking about it more, using jiffies is likely broken
anyway: if interrupts are disabled and the spinning CPU is the one
responsible for the main timer interrupt, jiffies won't advance.
cpu_clock() (better than ktime_get()) or sched_clock() would work.
-Andi