Message-ID: <alpine.DEB.2.21.1907050024270.1802@nanos.tec.linutronix.de>
Date: Fri, 5 Jul 2019 00:33:23 +0200 (CEST)
From: Thomas Gleixner <tglx@...utronix.de>
To: Mathieu Desnoyers <mathieu.desnoyers@...icios.com>
cc: linux-kernel <linux-kernel@...r.kernel.org>, x86 <x86@...nel.org>,
Nadav Amit <namit@...are.com>
Subject: Re: [PATCH] cpu/hotplug: Cache number of online CPUs
On Thu, 4 Jul 2019, Mathieu Desnoyers wrote:
> ----- On Jul 4, 2019, at 5:10 PM, Thomas Gleixner tglx@...utronix.de wrote:
> >
> > num_online_cpus() is racy today vs. CPU hotplug operations as
> > long as you don't hold the hotplug lock.
>
> Fair point, AFAIU none of the loads performed within num_online_cpus()
> seem to rely on atomic or volatile accesses. So not using a volatile
> access to load the cached value should not introduce any regression.
>
> I'm concerned that some code may rely on re-fetching of the cached
> value between iterations of a loop. The lack of READ_ONCE() would
> let the compiler keep a lifted load within a register and never
> re-fetch, unless there is a cpu_relax() or a barrier() within the
> loop.
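
For illustration, the pattern the above worries about would be something
like this (hypothetical loop, assuming the usual kernel context providing
num_online_cpus() and cpu_relax()):

	/*
	 * Hypothetical example: if num_online_cpus() is a plain,
	 * non-volatile read of the cached count, the compiler may hoist
	 * the load out of the loop and spin on a stale register value.
	 */
	while (num_online_cpus() > 1)
		;			/* load may never be re-fetched */

	/* cpu_relax() acts as a compiler barrier and forces a re-read: */
	while (num_online_cpus() > 1)
		cpu_relax();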
If someone really wants to write code which can handle concurrent CPU
hotplug operations and rely on that information, then it's probably better
to write out:
ncpus = READ_ONCE(__num_online_cpus);
explicitly along with a big fat comment.
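
i.e. something along these lines (the comment text here is only a sketch):

	/*
	 * Snapshot the online CPU count. Inherently racy against
	 * concurrent hotplug unless cpus_read_lock() is held; this
	 * code only needs a consistent snapshot, not the exact
	 * current value.
	 */
	ncpus = READ_ONCE(__num_online_cpus);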
I can't figure out why one wants to do that and how it is supposed to work,
but my brain is in shutdown mode already :)
I'd rather write a proper kernel doc comment for num_online_cpus() which
explains what the constraints are instead of pretending that the READ_ONCE
in the inline has any meaning.
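
Roughly along these lines (wording is just a sketch of the constraints,
not the final comment):

	/**
	 * num_online_cpus() - Read the number of online CPUs
	 *
	 * The return value is only a snapshot. Unless the caller holds
	 * the CPU hotplug lock (cpus_read_lock()), the count can change
	 * at any time due to concurrent CPU hotplug operations, so it
	 * must not be relied upon where a stable value is required.
	 */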
Thanks,
tglx