Message-ID: <87v7wy2mbi.fsf@oracle.com>
Date: Thu, 07 Nov 2024 23:53:37 -0800
From: Ankur Arora <ankur.a.arora@...cle.com>
To: "Christoph Lameter (Ampere)" <cl@...two.org>
Cc: Ankur Arora <ankur.a.arora@...cle.com>, linux-pm@...r.kernel.org,
kvm@...r.kernel.org, linux-arm-kernel@...ts.infradead.org,
linux-kernel@...r.kernel.org, linux-arch@...r.kernel.org,
catalin.marinas@....com, will@...nel.org, tglx@...utronix.de,
mingo@...hat.com, bp@...en8.de, dave.hansen@...ux.intel.com,
x86@...nel.org, hpa@...or.com, pbonzini@...hat.com,
vkuznets@...hat.com, rafael@...nel.org, daniel.lezcano@...aro.org,
peterz@...radead.org, arnd@...db.de, lenb@...nel.org,
mark.rutland@....com, harisokn@...zon.com, mtosatti@...hat.com,
sudeep.holla@....com, maz@...nel.org, misono.tomohiro@...itsu.com,
maobibo@...ngson.cn, zhenglifeng1@...wei.com,
joao.m.martins@...cle.com, boris.ostrovsky@...cle.com,
konrad.wilk@...cle.com
Subject: Re: [PATCH v9 01/15] asm-generic: add barrier
smp_cond_load_relaxed_timeout()

Christoph Lameter (Ampere) <cl@...two.org> writes:
> On Thu, 7 Nov 2024, Ankur Arora wrote:
>
>> +#ifndef smp_cond_time_check_count
>> +/*
>> + * Limit how often smp_cond_load_relaxed_timeout() evaluates time_expr_ns.
>> + * This helps reduce the number of instructions executed while spin-waiting.
>> + */
>> +#define smp_cond_time_check_count 200
>> +#endif
>
> I don't like these loops that execute differently depending on the
> hardware. Can we use cycles and ns instead to have defined periods of
> time? Later patches establish the infrastructure to convert cycles to
> nanoseconds and microseconds. Use that?
>
>> +#ifndef smp_cond_load_relaxed_timeout
>> +#define smp_cond_load_relaxed_timeout(ptr, cond_expr, time_expr_ns, \
>> + time_limit_ns) ({ \
>> + typeof(ptr) __PTR = (ptr); \
>> + __unqual_scalar_typeof(*ptr) VAL; \
>> + unsigned int __count = 0; \
>> + for (;;) { \
>> + VAL = READ_ONCE(*__PTR); \
>> + if (cond_expr) \
>> + break; \
>> + cpu_relax(); \
>> + if (__count++ < smp_cond_time_check_count) \
>> + continue; \
>> + if ((time_expr_ns) >= time_limit_ns) \
>> + break; \
>
> Calling the clock retrieval function repeatedly should be fine and is
> typically done in user space as well as in kernel space for functions
> that need to wait for short periods of time.

The problem is that you might have multiple CPUs polling in idle for
prolonged periods of time, so you want to minimize the power/thermal
envelope.

For instance, see commit 4dc2375c1a4e ("cpuidle: poll_state: Avoid
invoking local_clock() too often"), which added a similar rate limit
to poll_idle() after running into exactly this issue.
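
Roughly, the rate limiting there looks like the sketch below (paraphrased
from memory of drivers/cpuidle/poll_state.c, so the details may not match
the current tree exactly):

	/*
	 * Sketch of the poll_idle() polling loop: local_clock() is
	 * only consulted once every POLL_IDLE_RELAX_COUNT spins
	 * instead of on every iteration.
	 */
	#define POLL_IDLE_RELAX_COUNT	200

	u64 time_start = local_clock();
	unsigned int loop_count = 0;

	while (!need_resched()) {
		cpu_relax();
		if (loop_count++ < POLL_IDLE_RELAX_COUNT)
			continue;

		loop_count = 0;
		if (local_clock() - time_start > POLL_IDLE_TIME_LIMIT)
			break;
	}

smp_cond_load_relaxed_timeout() follows the same pattern, with the count
limiting how often time_expr_ns is evaluated.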
--
ankur