Message-ID: <20180821194413.GA24538@cmpxchg.org>
Date: Tue, 21 Aug 2018 15:44:13 -0400
From: Johannes Weiner <hannes@...xchg.org>
To: Peter Zijlstra <peterz@...radead.org>
Cc: Ingo Molnar <mingo@...hat.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Tejun Heo <tj@...nel.org>,
Suren Baghdasaryan <surenb@...gle.com>,
Daniel Drake <drake@...lessm.com>,
Vinayak Menon <vinmenon@...eaurora.org>,
Christopher Lameter <cl@...ux.com>,
Mike Galbraith <efault@....de>,
Shakeel Butt <shakeelb@...gle.com>,
Peter Enderborg <peter.enderborg@...y.com>, linux-mm@...ck.org,
cgroups@...r.kernel.org, linux-kernel@...r.kernel.org,
kernel-team@...com
Subject: Re: [PATCH 8/9] psi: pressure stall information for CPU, memory, and
IO

Hi,

a quick update on that feedback before I send out v4:

On Fri, Aug 03, 2018 at 06:56:41PM +0200, Peter Zijlstra wrote:
> On Wed, Aug 01, 2018 at 11:19:57AM -0400, Johannes Weiner wrote:
> > +static bool test_state(unsigned int *tasks, int cpu, enum psi_states state)
> > +{
> > + switch (state) {
> > + case PSI_IO_SOME:
> > + return tasks[NR_IOWAIT];
> > + case PSI_IO_FULL:
> > + return tasks[NR_IOWAIT] && !tasks[NR_RUNNING];
> > + case PSI_MEM_SOME:
> > + return tasks[NR_MEMSTALL];
> > + case PSI_MEM_FULL:
> > + /*
> > + * Since we care about lost potential, things are
> > + * fully blocked on memory when there are no other
> > + * working tasks, but also when the CPU is actively
> > + * being used by a reclaimer and nothing productive
> > + * could run even if it were runnable.
> > + */
> > + return tasks[NR_MEMSTALL] &&
> > + (!tasks[NR_RUNNING] ||
> > + cpu_curr(cpu)->flags & PF_MEMSTALL);
>
> I don't think you can do this, there is nothing that guarantees
> cpu_curr() still exists.

As discussed later in this thread, I've replaced this with time
sampling from inside scheduler_tick(): in the unlikely event that
rq->curr is PF_MEMSTALL, it'll record TICK_NSEC worth of MEM_FULL.
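
For illustration, the tick-based idea looks roughly like this (a
minimal sketch with placeholder names; psi_task_tick() and
psi_memstall_tick() are not necessarily what v4 will end up calling
them):

/*
 * Sketch only: a hook called from scheduler_tick(). If the task
 * currently running on this CPU is in a memory stall, charge one
 * tick worth of MEM_FULL time via an assumed per-group helper.
 */
static void psi_task_tick(struct rq *rq)
{
	struct task_struct *curr = rq->curr;

	if (unlikely(curr->flags & PF_MEMSTALL))
		psi_memstall_tick(curr, cpu_of(rq));
}
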
However:
> > + for (s = PSI_NONIDLE; s >= 0; s--) {
> > + u32 time, delta;
> > +
> > + time = READ_ONCE(groupc->times[s]);
> > + /*
> > + * In addition to already concluded states, we
> > + * also incorporate currently active states on
> > + * the CPU, since states may last for many
> > + * sampling periods.
> > + *
> > + * This way we keep our delta sampling buckets
> > + * small (u32) and our reported pressure close
> > + * to what's actually happening.
> > + */
> > + if (test_state(groupc->tasks, cpu, s)) {
> > + /*
> > + * We can race with a state change and
> > + * need to make sure the state_start
> > + * update is ordered against the
> > + * updates to the live state and the
> > + * time buckets (groupc->times).
> > + *
> > + * 1. If we observe task state that
> > + * needs to be recorded, make sure we
> > + * see state_start from when that
> > + * state went into effect or we'll
> > + * count time from the previous state.
> > + *
> > + * 2. If the time delta has already
> > + * been added to the bucket, make sure
> > + * we don't see it in state_start or
> > + * we'll count it twice.
> > + *
> > + * If the time delta is out of
> > + * state_start but not in the time
> > + * bucket yet, we'll miss it entirely
> > + * and handle it in the next period.
> > + */
> > + smp_rmb();
> > + time += cpu_clock(cpu) - groupc->state_start;
> > + }
>
> The alternative is adding an update to scheduler_tick(), that would
> ensure you're never more than nr_cpu_ids * TICK_NSEC behind.

I wasn't able to convert *all* states to tick updates like this. The
reason is that, while testing rq->curr for PF_MEMSTALL is cheap, other
tasks associated with the rq could be from any cgroup in the system.
That means we'd have to do for_each_cgroup() on every tick to keep the
groupc->times that closely up to date, and that wouldn't scale. We
tend to have hundreds of cgroups; some setups have thousands.

Since we don't need to be *that* current, I left the on-demand update
inside the aggregator for now. It's a bit trickier, but much cheaper.
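
To make the cost argument concrete, here is a rough sketch of the
per-tick path (iterate_groups() and the group->pcpu layout are
assumptions for illustration only, and ordering against the
aggregator is omitted):

/*
 * Sketch only: per-tick MEM_FULL accounting. The loop visits just
 * the psi groups the current task belongs to (its cgroup ancestry,
 * typically a handful) rather than every cgroup in the system,
 * which is what keeps it cheap enough to run from the tick.
 */
static void psi_memstall_tick(struct task_struct *task, int cpu)
{
	struct psi_group *group;
	void *iter = NULL;

	while ((group = iterate_groups(task, &iter))) {
		struct psi_group_cpu *groupc;

		groupc = per_cpu_ptr(group->pcpu, cpu);
		groupc->times[PSI_MEM_FULL] += TICK_NSEC;
	}
}

That keeps the tick path proportional to the depth of curr's cgroup
hierarchy instead of the total number of cgroups in the system.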