Date:   Mon, 26 Jun 2023 15:52:17 +0800
From:   Chen Yu <yu.c.chen@...el.com>
To:     Aaron Lu <aaron.lu@...el.com>,
        Peter Zijlstra <peterz@...radead.org>,
        Vincent Guittot <vincent.guittot@...aro.org>,
        Ingo Molnar <mingo@...hat.com>,
        Juri Lelli <juri.lelli@...hat.com>
CC:     Deng Pan <pan.deng@...el.com>, <tim.c.chen@...el.com>,
        <peterz@...radead.org>, <vincent.guittot@...aro.org>,
        <linux-kernel@...r.kernel.org>, <tianyou.li@...el.com>,
        <yu.ma@...el.com>, <lipeng.zhu@...el.com>,
        Tim Chen <tim.c.chen@...ux.intel.com>
Subject: Re: [PATCH v2] sched/task_group: Re-layout structure to reduce false sharing

On 2023-06-26 at 13:47:56 +0800, Aaron Lu wrote:
> On Wed, Jun 21, 2023 at 04:14:25PM +0800, Deng Pan wrote:
> > When running the UnixBench Pipe-based Context Switching case, we
> > observed heavy false sharing between 'load_avg' and the rt_se and
> > rt_rq fields when CONFIG_RT_GROUP_SCHED is enabled.
> > 
> > Pipe-based Context Switching is a typical sleep/wakeup scenario, in
> > which load_avg is frequently loaded and stored while rt_se and rt_rq
> > are frequently loaded. Unfortunately, they sit in the same cacheline.
> > 
> > This change re-lays out the structure:
> > 1. Move rt_se and rt_rq to a 2nd cacheline.
> > 2. Keep the 'parent' field in the 2nd cacheline, since it is also
> > accessed very often when cgroups are nested; thanks to Tim Chen for
> > providing this insight.
> > 
> > Tested on a 2-socket Intel Icelake platform (80 cores/160 threads), based on v6.4-rc5.
> > 
> > With this change, the Pipe-based Context Switching score at 160
> > parallel copies improves by ~9.6%. perf record shows the cycles spent
> > accessing rt_se and rt_rq drop from ~14.5% to ~0.3%, and perf c2c
> > shows the false sharing is resolved as expected:
> 
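To make the sharing pattern concrete, here is a minimal user-space
sketch of the before/after layout; the field names mirror struct
task_group, but the types, offsets and the 64-byte cacheline are
illustrative assumptions, not the kernel's actual definitions:

#include <stdatomic.h>
#include <stddef.h>
#include <stdio.h>

#define CACHELINE 64

/* Before: the hot counter shares a line with the read-mostly pointers. */
struct tg_before {
	atomic_long	load_avg;	/* stored on every sleep/wakeup */
	void		**rt_se;	/* loaded on every sleep/wakeup */
	void		**rt_rq;	/* loaded on every sleep/wakeup */
};

/* After: rt_se/rt_rq start on their own cacheline, as in the patch. */
struct tg_after {
	atomic_long	load_avg;
	_Alignas(CACHELINE) void **rt_se;
	void		**rt_rq;
};

_Static_assert(offsetof(struct tg_before, load_avg) / CACHELINE ==
	       offsetof(struct tg_before, rt_se) / CACHELINE,
	       "before: same cacheline, so stores to load_avg keep "
	       "invalidating the line that rt_se/rt_rq readers need");
_Static_assert(offsetof(struct tg_after, load_avg) / CACHELINE !=
	       offsetof(struct tg_after, rt_se) / CACHELINE,
	       "after: separate cachelines, no false sharing");

int main(void)
{
	printf("before: load_avg@%zu rt_se@%zu\n",
	       offsetof(struct tg_before, load_avg),
	       offsetof(struct tg_before, rt_se));
	printf("after:  load_avg@%zu rt_se@%zu\n",
	       offsetof(struct tg_after, load_avg),
	       offsetof(struct tg_after, rt_se));
	return 0;
}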
> I also gave it a run on an Icelake and saw similar results when
> CONFIG_RT_GROUP_SCHED is on.
> 
> For hackbench/pipe/thread, set_task_cpu() dropped from 1.67% to 0.51%
> of cycles according to perf; for netperf/nr_client=nr_cpu/UDP_RR,
> set_task_cpu() dropped from 5.06% to 1.08%.
> 
> The patch looks good to me, just a nit below.
>
I also saw overall netperf throughput improvements on Sapphire Rapids
with CONFIG_RT_GROUP_SCHED set; that platform is sensitive to
cache-to-cache (C2C) transfers, so this patch helps a lot.

netperf
=======
case                    load            baseline(std%)  compare%( std%)
TCP_RR                  56-threads       1.00 (  1.61)   +2.20 (  1.39)
TCP_RR                  112-threads      1.00 (  2.71)   -0.75 (  2.29)
TCP_RR                  168-threads      1.00 (  4.39)  -14.26 (  4.99)
TCP_RR                  224-threads      1.00 (  4.21)   -5.52 (  5.07)
TCP_RR                  280-threads      1.00 (  1.89)  +246.41 ( 61.31)
TCP_RR                  336-threads      1.00 ( 53.49)  +164.89 ( 21.45)
TCP_RR                  392-threads      1.00 ( 42.46)  +162.16 ( 31.33)
TCP_RR                  448-threads      1.00 ( 44.61)  +113.64 ( 41.74)
UDP_RR                  56-threads       1.00 (  3.63)   -1.27 (  3.73)
UDP_RR                  112-threads      1.00 (  7.83)   -4.16 ( 16.57)
UDP_RR                  168-threads      1.00 ( 18.08)  -16.54 ( 17.27)
UDP_RR                  224-threads      1.00 ( 12.60)   -5.77 ( 12.79)
UDP_RR                  280-threads      1.00 (  9.37)   -0.57 ( 15.75)
UDP_RR                  336-threads      1.00 ( 14.87)  +200.81 ( 34.90)
UDP_RR                  392-threads      1.00 ( 38.85)  -10.15 ( 46.04)
UDP_RR                  448-threads      1.00 ( 35.06)   -8.93 ( 55.56)

> > diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
> > index ec7b3e0a2b20..4fbd4b3a4bdd 100644
> > --- a/kernel/sched/sched.h
> > +++ b/kernel/sched/sched.h
> > @@ -389,6 +389,19 @@ struct task_group {
> >  #endif
> >  #endif
> >  
> > +	struct rcu_head		rcu;
> > +	struct list_head	list;
> > +
> > +	struct list_head	siblings;
> > +	struct list_head	children;
> > +
> > +	/*
> > +	 * To reduce false sharing, current layout is optimized to make
> > +	 * sure load_avg is in a different cacheline from parent, rt_se
> > +	 * and rt_rq.
> > +	 */
> > +	struct task_group	*parent;
> > +
> 
> I wonder if we can simply do:
> 
> diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
> index ec7b3e0a2b20..31b73e8d9568 100644
> --- a/kernel/sched/sched.h
> +++ b/kernel/sched/sched.h
> @@ -385,7 +385,9 @@ struct task_group {
>  	 * it in its own cacheline separated from the fields above which
>  	 * will also be accessed at each tick.
>  	 */
> -	atomic_long_t		load_avg ____cacheline_aligned;
> +	struct {
> +		atomic_long_t		load_avg;
> +	} ____cacheline_aligned_in_smp;
>  #endif
>  #endif
> 
> This way we can make sure there is no false sharing on load_avg, no
> matter how the layout of this structure changes in the future.
> 
> Your patch has the advantage of not adding any more padding and thus
> saves some space; the example code above has the advantage that we need
> not worry about future changes breaking the expected alignment, but it
> does make the structure a little larger (704 -> 768 bytes).
>
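The size growth follows directly from the alignment rules; a minimal
user-space sketch (assuming a 64-byte cacheline; on SMP kernels
____cacheline_aligned_in_smp boils down to an aligned attribute like
the one below):

#include <stddef.h>

#define CACHELINE 64

struct tg_sketch {
	long	a, b, c;	/* fields accessed at each tick */
	struct {
		long	load_avg;
	} __attribute__((aligned(CACHELINE)));
	long	d;		/* first field after the hot line */
};

/* The anonymous struct starts on a fresh cacheline... */
_Static_assert(offsetof(struct tg_sketch, load_avg) % CACHELINE == 0,
	       "load_avg gets its own line");
/*
 * ...and its size is padded up to a whole line, so the next field lands
 * on yet another one; that tail padding is the extra 64 bytes behind
 * the 704 -> 768 growth.
 */
_Static_assert(offsetof(struct tg_sketch, d) % CACHELINE == 0,
	       "the following field starts the next line");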
That looks reasonable; with it we would not need to adjust the layout
again in the future.
Besides the cacheline alignment: if the task is not an RT one, why do we
have to touch those fields at all? I wonder if the following change can
avoid that:

thanks,
Chenyu
 
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index ec7b3e0a2b20..067f1310bad2 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -1958,8 +1958,10 @@ static inline void set_task_rq(struct task_struct *p, unsigned int cpu)
 #endif
 
 #ifdef CONFIG_RT_GROUP_SCHED
-	p->rt.rt_rq  = tg->rt_rq[cpu];
-	p->rt.parent = tg->rt_se[cpu];
+	if (p->sched_class == &rt_sched_class) {
+		p->rt.rt_rq  = tg->rt_rq[cpu];
+		p->rt.parent = tg->rt_se[cpu];
+	}
 #endif
 }
 
-- 
2.25.1
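For reference, the test above works because each scheduling class in
the kernel is a singleton object, so pointer equality is enough to
classify a task. A stripped-down user-space illustration (all names
here are stand-ins, not the kernel's definitions):

#include <stdio.h>

struct sched_class { const char *name; };

/* In the kernel these are singletons, one per scheduling class. */
static const struct sched_class fair_sched_class = { "fair" };
static const struct sched_class rt_sched_class   = { "rt" };

struct task { const struct sched_class *sched_class; };

static void set_task_rq(struct task *p)
{
	/* '==', not '=': compare against the singleton's address */
	if (p->sched_class == &rt_sched_class)
		printf("RT task: set rt_rq/rt_se\n");
	else
		printf("%s task: leave the RT fields untouched\n",
		       p->sched_class->name);
}

int main(void)
{
	struct task rt = { &rt_sched_class };
	struct task fair = { &fair_sched_class };

	set_task_rq(&rt);	/* touches the RT fields */
	set_task_rq(&fair);	/* skips them */
	return 0;
}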

> Thanks,
> Aaron
> 
> >  #ifdef CONFIG_RT_GROUP_SCHED
> >  	struct sched_rt_entity	**rt_se;
> >  	struct rt_rq		**rt_rq;
> > @@ -396,13 +409,6 @@ struct task_group {
> >  	struct rt_bandwidth	rt_bandwidth;
> >  #endif
> >  
> > -	struct rcu_head		rcu;
> > -	struct list_head	list;
> > -
> > -	struct task_group	*parent;
> > -	struct list_head	siblings;
> > -	struct list_head	children;
> > -
> >  #ifdef CONFIG_SCHED_AUTOGROUP
> >  	struct autogroup	*autogroup;
> >  #endif
> > -- 
> > 2.39.3
> > 
