Message-ID: <20230412120736.GD628377@hirez.programming.kicks-ass.net>
Date: Wed, 12 Apr 2023 14:07:36 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: Daniel Jordan <daniel.m.jordan@...cle.com>
Cc: Aaron Lu <aaron.lu@...el.com>,
Dietmar Eggemann <dietmar.eggemann@....com>,
Ingo Molnar <mingo@...hat.com>,
Juri Lelli <juri.lelli@...hat.com>,
Vincent Guittot <vincent.guittot@...aro.org>,
Steven Rostedt <rostedt@...dmis.org>,
Ben Segall <bsegall@...gle.com>, Mel Gorman <mgorman@...e.de>,
Daniel Bristot de Oliveira <bristot@...hat.com>,
Valentin Schneider <vschneid@...hat.com>,
Tim Chen <tim.c.chen@...el.com>,
Nitin Tekchandani <nitin.tekchandani@...el.com>,
Waiman Long <longman@...hat.com>,
Yu Chen <yu.c.chen@...el.com>, linux-kernel@...r.kernel.org
Subject: Re: [RFC PATCH] sched/fair: Make tg->load_avg per node
On Thu, Mar 30, 2023 at 01:45:57PM -0400, Daniel Jordan wrote:
> The topology of my machine is different from yours, but it's the biggest
> I have, and I'm assuming cpu count is more important than topology when
> reproducing the remote accesses. I also tried on
Core count definitely matters some, but the thing that really hurts is
the cross-node (and cross-cache, which for Intel happens to be the same
set) atomics.
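
To illustrate the sharding idea in isolation (a simplified userspace
sketch, not the actual kernel code; NR_NODES, the struct name and the
helpers are made up): writers only touch their own node's counter and
the much rarer reader sums the shards, so the hot atomic stays
node-local.

	#include <stdatomic.h>
	#include <stdio.h>

	#define NR_NODES 4		/* assumed node count for the sketch */

	/* one counter per node instead of a single globally-contended one;
	 * pad each shard to a cacheline to avoid false sharing */
	struct node_shard {
		_Alignas(64) atomic_long val;
	};
	static struct node_shard load_avg[NR_NODES];

	/* writers update only their local node's shard */
	static void tg_load_avg_add(int node, long delta)
	{
		atomic_fetch_add_explicit(&load_avg[node].val, delta,
					  memory_order_relaxed);
	}

	/* readers (far less frequent) sum across the shards */
	static long tg_load_avg_read(void)
	{
		long sum = 0;

		for (int n = 0; n < NR_NODES; n++)
			sum += atomic_load_explicit(&load_avg[n].val,
						    memory_order_relaxed);
		return sum;
	}

	int main(void)
	{
		tg_load_avg_add(0, 100);
		tg_load_avg_add(1, -25);
		printf("total load_avg: %ld\n", tg_load_avg_read());
		return 0;
	}

The trade-off is the usual one for sharded counters: cheap,
contention-free updates in exchange for a slightly more expensive,
slightly stale read-side sum.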
I suppose the thing to measure is where this cost rises most sharply on
the AMD platforms -- is that cross-LLC or cross-node?
I mean, setting up the split at boot time is fairly straightforward and
we could equally well split at LLC. Something like the sketch below.
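
Again only a sketch, in plain userspace C with assumed topology counts
(nr_nodes, nr_llcs and the cpu->node / cpu->llc mappings are
placeholders for whatever the real topology code reports); the only
boot-time decision is how many shards to allocate and how a CPU maps to
one:

	#include <stdio.h>
	#include <stdlib.h>

	/* assumed topology counts; the kernel would query its own topology code */
	static int nr_nodes = 2;
	static int nr_llcs  = 8;

	static int nr_shards;
	static int *cpu_shard;		/* cpu -> shard index */

	/* placeholder cpu->node / cpu->llc mappings, purely for the sketch */
	static int cpu_to_node_id(int cpu) { return cpu % nr_nodes; }
	static int cpu_to_llc_id(int cpu)  { return cpu % nr_llcs; }

	/* pick the granularity once, "at boot", and remember each CPU's shard */
	static void setup_split(int nr_cpus, int split_at_llc)
	{
		nr_shards = split_at_llc ? nr_llcs : nr_nodes;
		cpu_shard = calloc(nr_cpus, sizeof(*cpu_shard));
		if (!cpu_shard)
			exit(1);

		for (int cpu = 0; cpu < nr_cpus; cpu++)
			cpu_shard[cpu] = split_at_llc ? cpu_to_llc_id(cpu)
						      : cpu_to_node_id(cpu);
	}

	int main(void)
	{
		setup_split(16, 1);	/* 16 CPUs, split at LLC */
		printf("shards=%d, cpu 5 -> shard %d\n", nr_shards, cpu_shard[5]);
		return 0;
	}

Splitting at LLC just means more, smaller shards; the update path is
identical either way, so the measurement above is what should decide
the granularity.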