Message-ID: <0c4d7bbb-3fef-031e-e9a1-a678ab68ade7@linux.vnet.ibm.com>
Date:   Wed, 22 Feb 2023 00:23:05 +0530
From:   shrikanth hegde <sshegde@...ux.vnet.ibm.com>
To:     Peter Zijlstra <peterz@...radead.org>
Cc:     mingo@...hat.com, Vincent Guittot <vincent.guittot@...aro.org>,
        dietmar.eggemann@....com, bsegall@...gle.com,
        Thomas Gleixner <tglx@...utronix.de>,
        Srikar Dronamraju <srikar@...ux.vnet.ibm.com>,
        Arjan van de Ven <arjan@...ux.intel.com>,
        svaidy@...ux.ibm.com, linux-kernel@...r.kernel.org
Subject: Re: [RFC PATCH] sched/fair: Interleave cfs bandwidth timers for
 improved single thread performance at low utilization



On 2/20/23 11:08 PM, Peter Zijlstra wrote:
> On Tue, Feb 14, 2023 at 08:54:09PM +0530, shrikanth hegde wrote:
> 
>> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
>> index ff4dbbae3b10..7b69c329e05d 100644
>> --- a/kernel/sched/fair.c
>> +++ b/kernel/sched/fair.c
>> @@ -5939,14 +5939,25 @@ static void init_cfs_rq_runtime(struct cfs_rq *cfs_rq)
>>
>>  void start_cfs_bandwidth(struct cfs_bandwidth *cfs_b)
>>  {
>> -	lockdep_assert_held(&cfs_b->lock);
>> +	struct hrtimer *period_timer = &cfs_b->period_timer;
>> +	s64 incr = ktime_to_ns(cfs_b->period) / 10;
>> +	ktime_t delta;
>> +	u64 orun = 1;
>>
>> +	lockdep_assert_held(&cfs_b->lock);
>>  	if (cfs_b->period_active)
>>  		return;
>>
>>  	cfs_b->period_active = 1;
>> -	hrtimer_forward_now(&cfs_b->period_timer, cfs_b->period);
>> -	hrtimer_start_expires(&cfs_b->period_timer, HRTIMER_MODE_ABS_PINNED);
>> +	delta = ktime_sub(period_timer->base->get_time(),
>> +			hrtimer_get_expires(period_timer));
>> +	if (unlikely(delta >= cfs_b->period)) {
>> +		orun = ktime_divns(delta, incr);
>> +		hrtimer_add_expires_ns(period_timer, incr * orun);
>> +	}
>> +
>> +	hrtimer_forward_now(period_timer, cfs_b->period);
>> +	hrtimer_start_expires(period_timer, HRTIMER_MODE_ABS_PINNED);
>>  }
> 
> What kind of mad hackery is this? Why can't you do the sane thing and
> initialize the timer at !0 in init_cfs_bandwidth(), then any of the
> forwards will stay in period -- as they should.
> 
> Please, go re-read Thomas's email.

Thank you Peter for taking a look and reviewing.
We can scrap this implementation and switch to the one you suggested.
All we need is to initialize the offset.

The only reason for doing it this way was my implementation: it couldn't be
fit into init_cfs_bandwidth, since the timers would align if the cgroups are
created together and would then continue to align forever.

> 
> *completely* untested...
> 
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 7c46485d65d7..4d6ea76096dc 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -5915,6 +5915,7 @@ void init_cfs_bandwidth(struct cfs_bandwidth *cfs_b)
> 
>  	INIT_LIST_HEAD(&cfs_b->throttled_cfs_rq);
>  	hrtimer_init(&cfs_b->period_timer, CLOCK_MONOTONIC, HRTIMER_MODE_ABS_PINNED);
> +	cfs_b->period_timer.node.expires = get_random_u32_below(cfs_b->period);

This approach/implementation is better, as the random function provides a
uniform distribution. I had to modify it a bit to make it work: along with
setting node.expires, we also need to set _softexpires, which is what
hrtimer_set_expires does.
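
For reference, hrtimer_set_expires() in include/linux/hrtimer.h is roughly
this (quoting from memory, please double-check against the tree):

static inline void hrtimer_set_expires(struct hrtimer *timer, ktime_t time)
{
	/* Set both the hard and the soft expiry time. */
	timer->node.expires = time;
	timer->_softexpires = time;
}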

Here are the numbers again, similar to the earlier run.
8-core system with SMT=8, 64 CPUs in total.
Workload: stress-ng --cpu=32 --cpu-ops=50000
                                                                                
           6.2-rc6                     |   with patch                           
8Core   1CG    power    2CG     power  |  1CG    power  2CG    power           
        27.5    80.6    40      90     |  27.3    82    32.3    104             
        27.5    81      40.2    91     |  27.5    81    38.7     96             
        27.7    80      40.1    89     |  27.6    80    29.7    115             
        27.7    80.1    40.3    94     |  27.6    80    31.5    105   

Will collect some more benchmark numbers w.r.t. performance.


>  	cfs_b->period_timer.function = sched_cfs_period_timer;
>  	hrtimer_init(&cfs_b->slack_timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL);
>  	cfs_b->slack_timer.function = sched_cfs_slack_timer;

The below patch worked.
Does it look okay? Shall I send [PATCH V1] with this change?

Question:
Should we skip adding the offset for the root_task_group?
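
If we do want to skip it, one option (just a rough sketch on top of the patch
below, assuming a plain pointer comparison against root_task_group's
cfs_bandwidth is acceptable here) would be:

	/* Sketch only: leave the root task group's period timer at offset 0. */
	if (cfs_b != &root_task_group.cfs_bandwidth)
		hrtimer_set_expires(&cfs_b->period_timer,
				    get_random_u32_below(cfs_b->period));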


---
 kernel/sched/fair.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index ff4dbbae3b10..6448533178af 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5923,6 +5923,9 @@ void init_cfs_bandwidth(struct cfs_bandwidth *cfs_b)
 	INIT_LIST_HEAD(&cfs_b->throttled_cfs_rq);
 	hrtimer_init(&cfs_b->period_timer, CLOCK_MONOTONIC, HRTIMER_MODE_ABS_PINNED);
 	cfs_b->period_timer.function = sched_cfs_period_timer;
+	/* Add a random offset so that timers interleave */
+	hrtimer_set_expires(&cfs_b->period_timer, get_random_u32_below(cfs_b->period));
+
 	hrtimer_init(&cfs_b->slack_timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL);
 	cfs_b->slack_timer.function = sched_cfs_slack_timer;
 	cfs_b->slack_started = false;
-- 
2.31.1
