Message-ID: <CALOAHbDhPhuGK-Hd1SCN=5fx1ZEFXQnoubncvjwHw=+MHOBDPA@mail.gmail.com>
Date: Fri, 25 Apr 2025 10:29:20 +0800
From: Yafang Shao <laoar.shao@...il.com>
To: Eric Dumazet <edumazet@...gle.com>
Cc: Ingo Molnar <mingo@...hat.com>, Peter Zijlstra <peterz@...radead.org>,
Juri Lelli <juri.lelli@...hat.com>, Vincent Guittot <vincent.guittot@...aro.org>,
Dietmar Eggemann <dietmar.eggemann@....com>, Steven Rostedt <rostedt@...dmis.org>,
Ben Segall <bsegall@...gle.com>, Mel Gorman <mgorman@...e.de>,
Valentin Schneider <vschneid@...hat.com>, linux-kernel <linux-kernel@...r.kernel.org>,
Eric Dumazet <eric.dumazet@...il.com>, Sean Christopherson <seanjc@...gle.com>,
Josh Don <joshdon@...gle.com>
Subject: Re: [PATCH] sched/fair: reduce false sharing on sched_balance_running
On Thu, Apr 24, 2025 at 11:50 PM Eric Dumazet <edumazet@...gle.com> wrote:
>
> On Thu, Apr 24, 2025 at 7:46 AM Yafang Shao <laoar.shao@...il.com> wrote:
> >
> > On Thu, Apr 24, 2025 at 1:46 AM Eric Dumazet <edumazet@...gle.com> wrote:
> > >
> > > rebalance_domains() can attempt to change sched_balance_running
> > > more than 350,000 times per second on our servers.
> > >
> > > If sched_clock_irqtime and sched_balance_running share the
> > > same cache line, we see a very high cost on hosts with 480 threads
> > > dealing with many interrupts.
> >
> > CONFIG_IRQ_TIME_ACCOUNTING is enabled on your systems, right?
> > Have you observed any impact on task CPU utilization measurements due
> > to this configuration? [0]
> >
> > If cache misses on sched_clock_irqtime are indeed the bottleneck, why
> > not align it to improve performance?
>
> "Align it" meaning what exactly?
Such as:
static __cacheline_aligned_in_smp int sched_clock_irqtime;
> Once sched_clock_irqtime is in a
> read-mostly location everything is fine.
>
> The main bottleneck is the false sharing on these Intel 6980P cpus...
>
> On a dual socket system, this false sharing is using something like 4%
> of the total memory bandwidth,
> and causes apparent high costs on other parts of the kernel.
>
> >
> > [0]. https://lore.kernel.org/all/20250103022409.2544-1-laoar.shao@gmail.com/
>
> What part should I look at, and how is this related to my patch?
It's unrelated to your patch. Please ignore it if you haven't hit this issue.
--
Regards
Yafang