Message-ID: <20250627004851.GB222768@pauld.westford.csb>
Date: Thu, 26 Jun 2025 20:48:51 -0400
From: Phil Auld <pauld@...hat.com>
To: Waiman Long <llong@...hat.com>
Cc: Frederic Weisbecker <frederic@...nel.org>,
LKML <linux-kernel@...r.kernel.org>, Ingo Molnar <mingo@...hat.com>,
Marco Crivellari <marco.crivellari@...e.com>,
Michal Hocko <mhocko@...e.com>,
Peter Zijlstra <peterz@...radead.org>, Tejun Heo <tj@...nel.org>,
Thomas Gleixner <tglx@...utronix.de>,
Vlastimil Babka <vbabka@...e.cz>
Subject: Re: [PATCH 02/27] sched/isolation: Introduce housekeeping per-cpu rwsem

On Thu, Jun 26, 2025 at 08:11:54PM -0400 Waiman Long wrote:
> On 6/25/25 11:50 AM, Phil Auld wrote:
> > On Wed, Jun 25, 2025 at 04:34:18PM +0200 Frederic Weisbecker wrote:
> > > Le Wed, Jun 25, 2025 at 08:18:50AM -0400, Phil Auld a écrit :
> > > > Hi Waiman,
> > > >
> > > > On Mon, Jun 23, 2025 at 01:34:58PM -0400 Waiman Long wrote:
> > > > > On 6/20/25 11:22 AM, Frederic Weisbecker wrote:
> > > > > > The HK_TYPE_DOMAIN isolation cpumask, and later also the
> > > > > > HK_TYPE_KERNEL_NOISE cpumask, will be made modifiable at
> > > > > > runtime in the future.
> > > > > >
> > > > > > The affected subsystems will need to synchronize against those cpumask
> > > > > > changes so that:
> > > > > >
> > > > > > * The readers get a coherent snapshot
> > > > > > * The housekeeping subsystem can safely propagate a cpumask update to
> > > > > >   the subsystems after it has been published.
> > > > > >
> > > > > > Protect read sides that can sleep with a per-cpu rwsem. Updates are
> > > > > > expected to be very rare given that CPU isolation is a niche use case
> > > > > > and the related cpuset setup happens only in preparation work. On the
> > > > > > other hand, read sides can occur in more frequent paths.
> > > > > >
> > > > > > Signed-off-by: Frederic Weisbecker <frederic@...nel.org>
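
For reference, a minimal sketch of the percpu-rwsem pattern the patch
description is talking about. The lock name housekeeping_pcpu_lock is
hypothetical; the API itself is the kernel's include/linux/percpu-rwsem.h:

    #include <linux/percpu-rwsem.h>

    DEFINE_STATIC_PERCPU_RWSEM(housekeeping_pcpu_lock);

    /* Read side: may sleep, near-zero cost while no writer is pending */
    percpu_down_read(&housekeeping_pcpu_lock);
    /* ... take a coherent snapshot of the housekeeping cpumask ... */
    percpu_up_read(&housekeeping_pcpu_lock);

    /* Write side: rare, blocks until all current readers have drained */
    percpu_down_write(&housekeeping_pcpu_lock);
    /* ... publish the updated cpumask to the subsystems ... */
    percpu_up_write(&housekeeping_pcpu_lock);
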
> > > > > Thanks for the patch series; it certainly has some good ideas. However,
> > > > > I am a bit concerned about the overhead of using percpu-rwsem for
> > > > > synchronization, especially when the readers have to wait for completion
> > > > > on the writer side. From my point of view, during the transition period
> > > > > when new isolated CPUs are being added or old ones are being removed,
> > > > > the reader will either get the old CPU data or the new one depending on
> > > > > the exact timing. The effect on CPU selection may persist for a while
> > > > > after the end of the critical section.
> > > > >
> > > > > Can we just rely on RCU to make sure that a reader either gets the new
> > > > > mask or the old one, but nothing in between, without the additional
> > > > > overhead?
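
A hypothetical sketch of that RCU alternative, assuming a single
RCU-protected mask pointer (the names hk_domain_mask and new_mask are
made up for illustration). Readers see either the old mask or the new
one, never a partial update:

    static struct cpumask __rcu *hk_domain_mask;
    struct cpumask *mask, *old, *new_mask;

    /* Reader: cheap, never waits on a writer */
    rcu_read_lock();
    mask = rcu_dereference(hk_domain_mask);
    /* ... use mask; CPU selection may lag briefly after an update ... */
    rcu_read_unlock();

    /* Writer: publish the new mask, then free the old one
     * (the update-side lock is assumed held, hence the bare 1) */
    old = rcu_replace_pointer(hk_domain_mask, new_mask, 1);
    synchronize_rcu();
    kfree(old);
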
> > > > >
> > > > > My current thinking is to make use of CPU hotplug to enable better CPU
> > > > > isolation. IOW, I would shut down the affected CPUs, change the
> > > > > housekeeping masks and then bring them back online again. That means
> > > > > the writer side will take a while to complete.
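
A rough sketch of that hotplug-based update, assuming the kernel's
add_cpu()/remove_cpu() helpers from include/linux/cpu.h; the
update_housekeeping_masks() function is hypothetical:

    int cpu;

    /* Full offline: runs every teardown from CPUHP_ONLINE on down */
    for_each_cpu(cpu, update_mask)
        remove_cpu(cpu);

    update_housekeeping_masks(new_mask);    /* hypothetical */

    /* Bring the CPUs back online with the new masks in place */
    for_each_cpu(cpu, update_mask)
        add_cpu(cpu);
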
> > > > The problem with this approach is that offlining a cpu affects all the
> > > > other cpus and causes latency spikes in low-latency tasks that may
> > > > already be running on other parts of the system.
> > > >
> > > > I just don't want us to finally get to dynamic isolation and have it
> > > > not be usable for the use cases asking for it.
> > > We'll have to discuss that eventually because that's the plan for nohz_full.
> > > We can work around the stop-machine rendezvous on nohz_full if that's the
> > > problem. If the issue is avoiding any interruption of ordinary RT tasks,
> > > then that's a different problem for which I don't have a solution.
> > >
> > My understanding is that it's the stop machine issue. If you have a way
> > around that then great!
>
> My current thinking is to run just a selected set of the CPUHP teardown and
> startup methods relevant to housekeeping cpumask usage, without calling the
> full set from CPUHP_ONLINE to CPUHP_OFFLINE. I don't know whether that is
> possible or how many additional changes would be needed to make it possible.
> That would skip the CPUHP_TEARDOWN_CPU teardown method, which is likely the
> cause of most of the latency spikes experienced by other CPUs.
>
Yes, CPUHP_TEARDOWN_CPU is the source of the stop_machine, I believe.
It'll be interesting to see if you can safely use the cpuhp machinery
selectively like that :)
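
If that pans out, one can imagine something along these lines, reusing
the target-state mechanism in kernel/cpu.c. Note this is only a sketch:
the _cpu_down()/_cpu_up() helpers are internal and static today, and
HK_PARTIAL_STATE is entirely made up:

    /* Step the CPU down only through the hotplug states that matter
     * for the housekeeping masks.  As long as the target stays above
     * CPUHP_TEARDOWN_CPU, takedown_cpu() and its stop_machine
     * rendezvous are never reached.
     */
    _cpu_down(cpu, 0, HK_PARTIAL_STATE);    /* hypothetical target */
    /* ... update the housekeeping cpumasks ... */
    _cpu_up(cpu, 0, CPUHP_ONLINE);          /* rerun the startup methods */
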
Cheers,
Phil
> Cheers,
> Longman
>