Message-ID: <ZVyxMrisyuBtQ+2Y@yury-ThinkPad>
Date: Tue, 21 Nov 2023 05:31:30 -0800
From: Yury Norov <yury.norov@...il.com>
To: Mathieu Desnoyers <mathieu.desnoyers@...icios.com>
Cc: Peter Zijlstra <peterz@...radead.org>,
linux-kernel@...r.kernel.org,
Andy Shevchenko <andriy.shevchenko@...ux.intel.com>,
Rasmus Villemoes <linux@...musvillemoes.dk>,
Ingo Molnar <mingo@...hat.com>,
Juri Lelli <juri.lelli@...hat.com>,
Vincent Guittot <vincent.guittot@...aro.org>,
Dietmar Eggemann <dietmar.eggemann@....com>,
Steven Rostedt <rostedt@...dmis.org>,
Ben Segall <bsegall@...gle.com>, Mel Gorman <mgorman@...e.de>,
Daniel Bristot de Oliveira <bristot@...hat.com>,
Valentin Schneider <vschneid@...hat.com>,
Jan Kara <jack@...e.cz>,
Mirsad Todorovac <mirsad.todorovac@....unizg.hr>,
Matthew Wilcox <willy@...radead.org>,
Maxim Kuvyrkov <maxim.kuvyrkov@...aro.org>,
Alexey Klimov <klimov.linux@...il.com>
Subject: Re: [PATCH 04/34] sched: add cpumask_find_and_set() and use it in
__mm_cid_get()
On Mon, Nov 20, 2023 at 11:17:32AM -0500, Mathieu Desnoyers wrote:
...
> > > diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
> > > index 2e5a95486a42..b2f095a9fc40 100644
> > > --- a/kernel/sched/sched.h
> > > +++ b/kernel/sched/sched.h
> > > @@ -3345,28 +3345,6 @@ static inline void mm_cid_put(struct mm_struct *mm)
> > >  	__mm_cid_put(mm, mm_cid_clear_lazy_put(cid));
> > >  }
> > > -static inline int __mm_cid_try_get(struct mm_struct *mm)
> > > -{
> > > -	struct cpumask *cpumask;
> > > -	int cid;
> > > -
> > > -	cpumask = mm_cidmask(mm);
> > > -	/*
> > > -	 * Retry finding first zero bit if the mask is temporarily
> > > -	 * filled. This only happens during concurrent remote-clear
> > > -	 * which owns a cid without holding a rq lock.
> > > -	 */
> > > -	for (;;) {
> > > -		cid = cpumask_first_zero(cpumask);
> > > -		if (cid < nr_cpu_ids)
> > > -			break;
> > > -		cpu_relax();
> > > -	}
> > > -	if (cpumask_test_and_set_cpu(cid, cpumask))
> > > -		return -1;
>
> This was split in find / test_and_set on purpose because the following
> patches I have (implementing numa-aware mm_cid) have a scan which
> needs to scan sets of two cpumasks in parallel (with "and" and
> "andnot" operators).
>
> Moreover, the "mask full" scenario only happens while a concurrent
> remote-clear temporarily owns a cid without rq lock. See
> sched_mm_cid_remote_clear():
>
> 	/*
> 	 * The cid is unused, so it can be unset.
> 	 * Disable interrupts to keep the window of cid ownership without rq
> 	 * lock small.
> 	 */
> 	local_irq_save(flags);
> 	if (try_cmpxchg(&pcpu_cid->cid, &lazy_cid, MM_CID_UNSET))
> 		__mm_cid_put(mm, cid);
> 	local_irq_restore(flags);
>
> The proposed patch here turns this scenario into something heavier
> (setting the use_cid_lock) rather than just retrying. I guess the
> question to ask here is whether it is theoretically possible to cause
> __mm_cid_try_get() to fail to have forward progress if we have a high
> rate of sched_mm_cid_remote_clear. If we decide that this is indeed
> a possible progress-failure scenario, then it makes sense to fallback
> to use_cid_lock as soon as a full mask is encountered.
>
> However, removing the __mm_cid_try_get() helper will make it harder to
> integrate the following numa-awareness patches I have on top.
>
> I am not against using cpumask_find_and_set, but can we keep the
> __mm_cid_try_get() helper to facilitate integration of future work?
> We just have to make it use cpumask_find_and_set, which should be
> easy.
Sure, I can. Can you point me to the work you mention here?