Message-ID: <20180907145858.GK24106@hirez.programming.kicks-ass.net>
Date: Fri, 7 Sep 2018 16:58:58 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: Johannes Weiner <hannes@...xchg.org>
Cc: Ingo Molnar <mingo@...hat.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Tejun Heo <tj@...nel.org>,
Suren Baghdasaryan <surenb@...gle.com>,
Daniel Drake <drake@...lessm.com>,
Vinayak Menon <vinmenon@...eaurora.org>,
Christopher Lameter <cl@...ux.com>,
Peter Enderborg <peter.enderborg@...y.com>,
Shakeel Butt <shakeelb@...gle.com>,
Mike Galbraith <efault@....de>, linux-mm@...ck.org,
cgroups@...r.kernel.org, linux-kernel@...r.kernel.org,
kernel-team@...com
Subject: Re: [PATCH 8/9] psi: pressure stall information for CPU, memory, and
IO
On Fri, Sep 07, 2018 at 10:44:22AM -0400, Johannes Weiner wrote:
> > This does the whole seqcount thing 6x, which is a bit of a waste.
>
> [...]
>
> > It's a bit cumbersome, but that's because of C.
>
> I was actually debating exactly this with Suren before, but since this
> is a super cold path I went with readability. I was also thinking that
> restarts could happen quite regularly under heavy scheduler load, and
> so keeping the individual retry sections small could be helpful - but
> I didn't instrument this in any way.
I was hoping that going over the whole thing once would reduce the time
we need to keep that cacheline in shared mode, and with it the coherence
traffic. And yes, this path is cold, but I was thinking about reducing
the interference on the remote CPU.
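Something like the below is what I had in mind -- only a sketch, and
the psi_group_cpu layout plus the field and function names here are
made up for illustration, not taken from the patch -- but it shows a
single begin/retry pair covering every read from the remote CPU's
line, with one cpu_clock() call inside it:

#include <linux/types.h>
#include <linux/seqlock.h>
#include <linux/sched/clock.h>
#include <linux/string.h>

/* Illustrative layout only. */
struct psi_group_cpu {
	seqcount_t seq;
	u32 times[3];		/* hypothetical per-state stall times */
	u32 state_mask;
	u64 state_start;
};

static void get_recent_times(struct psi_group_cpu *groupc, int cpu,
			     u32 *times, u32 *state_mask,
			     u64 *state_start, u64 *now)
{
	unsigned int seq;

	/* One retry section for everything we need off that line. */
	do {
		seq = read_seqcount_begin(&groupc->seq);

		*now = cpu_clock(cpu);
		memcpy(times, groupc->times, sizeof(groupc->times));
		*state_mask = groupc->state_mask;
		*state_start = groupc->state_start;
	} while (read_seqcount_retry(&groupc->seq, seq));
}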
Alternatively, we could memcpy the whole cacheline under the seqcount
and do all the processing afterwards.
Also, doing it in one section means only a single cpu_clock() invocation.
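The memcpy variant would then be roughly (same made-up psi_group_cpu
layout as in the sketch above, and assuming the struct actually fits
in one cacheline):

/* Snapshot the whole line in one go, do the arithmetic on the copy. */
static void snapshot_group_cpu(struct psi_group_cpu *groupc,
			       struct psi_group_cpu *snap)
{
	unsigned int seq;

	do {
		seq = read_seqcount_begin(&groupc->seq);
		memcpy(snap, groupc, sizeof(*snap));
	} while (read_seqcount_retry(&groupc->seq, seq));

	/* From here on, work only on *snap, not the shared line. */
}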