Message-ID: <20200713165449.GM10769@hirez.programming.kicks-ass.net>
Date: Mon, 13 Jul 2020 18:54:49 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: Qais Yousef <qais.yousef@....com>
Cc: Ingo Molnar <mingo@...hat.com>,
Doug Anderson <dianders@...omium.org>,
Jonathan Corbet <corbet@....net>,
Juri Lelli <juri.lelli@...hat.com>,
Vincent Guittot <vincent.guittot@...aro.org>,
Dietmar Eggemann <dietmar.eggemann@....com>,
Steven Rostedt <rostedt@...dmis.org>,
Ben Segall <bsegall@...gle.com>, Mel Gorman <mgorman@...e.de>,
Luis Chamberlain <mcgrof@...nel.org>,
Kees Cook <keescook@...omium.org>,
Iurii Zaikin <yzaikin@...gle.com>,
Quentin Perret <qperret@...gle.com>,
Valentin Schneider <valentin.schneider@....com>,
Patrick Bellasi <patrick.bellasi@...bug.net>,
Pavan Kondeti <pkondeti@...eaurora.org>,
linux-doc@...r.kernel.org, linux-kernel@...r.kernel.org,
linux-fsdevel@...r.kernel.org
Subject: Re: [PATCH v6 1/2] sched/uclamp: Add a new sysctl to control RT
default boost value
On Mon, Jul 13, 2020 at 03:27:55PM +0100, Qais Yousef wrote:
> On 07/13/20 15:35, Peter Zijlstra wrote:
> > > I protect this with rcu_read_lock(). As far as I know, if we do the
> > > update while a reader is inside this critical section, synchronize_rcu()
> > > will wait for that reader to finish. New forkees entering the
> > > rcu_read_lock() section after the update will be okay because they
> > > should see the new value.
> > >
> > > spinlocks() and mutexes seemed inferior to this approach.
> >
> > Well, didn't we just write in another patch that p->uclamp_* was
> > protected by both rq->lock and p->pi_lock?
>
> The __setscheduler_uclamp() path does hold these locks; I'm not sure whether
> that's by design or whether it just happens to. I can't see the locks being
> taken in the uclamp_fork() path. But it's sometimes hard to unfold the layers
> of callers, especially since not all call sites are annotated with which lock
> is assumed to be held.
>
> Is it safe to take those locks in uclamp_fork() while the task is still being
> created? My new code doesn't hold them, of course.
>
> We can enforce this rule if you like, though an RCU critical section seems
> lighter weight to me (sketched below).
>
> If all of this does indeed start looking messy, we can put the update in
> a delayed worker and schedule that instead of doing the setup synchronously.
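
For concreteness, the RCU scheme you describe is something like the below
(sketch only; the helper names are made up, the sysctl is the one this
series adds):

	static void uclamp_fork_rt_default(struct task_struct *p)
	{
		if (!rt_task(p))
			return;

		rcu_read_lock();
		uclamp_se_set(&p->uclamp_req[UCLAMP_MIN],
			      READ_ONCE(sysctl_sched_uclamp_util_min_rt_default),
			      false);
		rcu_read_unlock();
	}

	static void uclamp_update_util_min_rt_default(unsigned int val)
	{
		WRITE_ONCE(sysctl_sched_uclamp_util_min_rt_default, val);

		/*
		 * Wait for in-flight forkees; after this, any fork that
		 * raced with the write has finished and its task is
		 * visible, so the walk below catches it.
		 */
		synchronize_rcu();

		/* ... walk existing RT tasks and update their uclamp_req ... */
	}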
sched_fork() doesn't need the locks, because at that point the task
isn't visible yet. HOWEVER, sched_post_fork() runs after the task is
added to the pid-hash (by design) and is thus visible, so we can race
against sched_setattr() and had better hold those locks anyway.
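
IOW, something like the below (entirely untested sketch; task_rq_lock()
takes both p->pi_lock and rq->lock):

	void sched_post_fork(struct task_struct *p)
	{
		struct rq_flags rf;
		struct rq *rq;

		/*
		 * The task is in the pid-hash here, so sched_setattr()
		 * can find it; take the same locks that
		 * __setscheduler_uclamp() runs under.
		 */
		rq = task_rq_lock(p, &rf);
		uclamp_fork_rt_default(p);	/* the hypothetical helper from above */
		task_rq_unlock(rq, p, &rf);
	}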