Message-ID: <4fb53053.b004.1927b68f2f8.Coremail.xavier_qy@163.com>
Date: Fri, 11 Oct 2024 19:48:48 +0800 (CST)
From: Xavier  <xavier_qy@....com>
To: "Peter Zijlstra" <peterz@...radead.org>
Cc: mingo@...hat.com, juri.lelli@...hat.com, vincent.guittot@...aro.org, 
	dietmar.eggemann@....com, rostedt@...dmis.org, bsegall@...gle.com, 
	mgorman@...e.de, vschneid@...hat.com, yu.c.chen@...el.com, 
	linux-kernel@...r.kernel.org
Subject: Re: Re: [PATCH v2] sched/eevdf: Reduce the computation frequency of
 avg_vruntime

At 2024-10-11 16:52:01, "Peter Zijlstra" <peterz@...radead.org> wrote:
>On Fri, Oct 11, 2024 at 02:24:49PM +0800, Xavier wrote:
>> The current code subtracts the value of curr from avg_vruntime and avg_load
>> during runtime. Then, every time avg_vruntime() is called, it adds the
>> value of curr to the avg_vruntime and avg_load. Afterward, it divides these
>> and adds min_vruntime to obtain the actual avg_vruntime.
>> 
>> Analysis of the code indicates that avg_vruntime only changes significantly
>> during update_curr(), update_min_vruntime(), and when tasks are enqueued or
>> dequeued. Therefore, it is sufficient to recalculate and store avg_vruntime
>> only in these specific scenarios. This optimization ensures that accessing
>> avg_vruntime() does not necessitate a recalculation each time, thereby
>> enhancing the efficiency of the code.
>> 
>> There is no need to subtract curr’s load from avg_load during runtime.
>> Instead, we only need to calculate the incremental change and update
>> avg_vruntime whenever curr’s time is updated.
>> 
>> To better represent their functions, rename the original avg_vruntime and
>> avg_load to tot_vruntime and tot_load, respectively, which more accurately
>> describes their roles in the computation.
>> 
>> Signed-off-by: Xavier <xavier_qy@....com>
>
>This makes the code more complicated for no shown benefit.
Hi Peter,

Thank you for reviewing this patch. I would like to address your concern as follows:

Code complexity vs. clarity: I agree that this modification adds some
complexity, but the resulting calculation is more direct. The patch keeps
avg_vruntime consistent with how load is added and subtracted: enqueueing or
dequeueing a task directly updates the avg_vruntime of the cfs_rq, which
seems logical.

Efficiency improvements: this approach avoids unnecessary recalculation.
entity_eligible() and vruntime_eligible() are called at high frequency, and
the existing code adds curr->vruntime into cfs_rq->avg_vruntime on every
eligibility check. When many tasks in the cfs_rq are ineligible, pick_eevdf()
repeats that computation over and over. The patch computes
cfs_rq->tot_vruntime only when an update is actually needed, so
vruntime_eligible() can use the precomputed value directly.

Reducing avg_vruntime calculations: the patch also reduces how often
avg_vruntime is evaluated. The original code calls avg_vruntime() every time
the value is needed, even though many of those calls are redundant when
curr->vruntime has not changed. With this patch, cfs_rq->avg_vruntime is
updated only when curr->vruntime or cfs_rq->tot_vruntime changes, so
subsequent callers can read the current value directly.

I hope this explanation clarifies the benefits of the patch. I welcome any
comments or suggestions. Thank you!
