Message-ID: <87mtu2vhzz.fsf@linux.intel.com>
Date: Mon, 12 Apr 2021 23:03:12 -0700
From: Andi Kleen <ak@...ux.intel.com>
To: Alex Kogan <alex.kogan@...cle.com>
Cc: linux@...linux.org.uk, peterz@...radead.org, mingo@...hat.com,
will.deacon@....com, arnd@...db.de, longman@...hat.com,
linux-arch@...r.kernel.org, linux-arm-kernel@...ts.infradead.org,
linux-kernel@...r.kernel.org, tglx@...utronix.de, bp@...en8.de,
hpa@...or.com, x86@...nel.org, guohanjun@...wei.com,
jglauber@...vell.com, steven.sistare@...cle.com,
daniel.m.jordan@...cle.com, dave.dice@...cle.com
Subject: Re: [PATCH v14 4/6] locking/qspinlock: Introduce starvation avoidance into CNA
Alex Kogan <alex.kogan@...cle.com> writes:
>
> + numa_spinlock_threshold= [NUMA, PV_OPS]
> + Set the time threshold in milliseconds for the
> + number of intra-node lock hand-offs before the
> + NUMA-aware spinlock is forced to be passed to
> + a thread on another NUMA node. Valid values
> + are in the [1..100] range. Smaller values result
> + in a more fair, but less performant spinlock,
> + and vice versa. The default value is 10.
Millisecond granularity seems very coarse-grained for this. Surely at
some point of spinning you can afford a ktime_get()? But ok.

Could you turn that into a module param which can be changed at runtime?
It would be strange to have to reboot just to play with this parameter.
This would also make the code a lot shorter, I guess.
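A runtime-tunable parameter along these lines might be sketched as
follows. This is only an illustration of the suggestion, not code from
the patch; the variable name, bounds, and permission bits are
assumptions:

```c
/* Sketch of a runtime-tunable hand-off threshold.  The 0644 mode
 * exposes it read-write to root under
 * /sys/module/<module>/parameters/, so it can be changed without a
 * reboot; for built-in code it is still settable on the kernel
 * command line via the <module>.<param>= syntax.
 */
static unsigned int numa_spinlock_threshold_ms = 10;	/* default per patch */
module_param(numa_spinlock_threshold_ms, uint, 0644);
MODULE_PARM_DESC(numa_spinlock_threshold_ms,
		 "Intra-node lock hand-off threshold in milliseconds (1-100)");
```

Range validation (the documented 1..100 bound) would still need to be
applied wherever the value is read, or via a custom kernel_param_ops
set handler.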
-Andi