Open Source and information security mailing list archives
 
Message-ID: <20240226175028.GA1903@maniforge>
Date: Mon, 26 Feb 2024 11:50:28 -0600
From: David Vernet <void@...ifault.com>
To: peterz@...radead.org
Cc: mingo@...hat.com, linux-kernel@...r.kernel.org, juri.lelli@...hat.com,
	vincent.guittot@...aro.org, dietmar.eggemann@....com,
	bsegall@...gle.com, bristot@...hat.com, vschneid@...hat.com,
	kernel-team@...a.com
Subject: Re: [PATCH v3 0/3] sched/fair: Simplify and optimize
 update_sd_pick_busiest()

On Fri, Feb 16, 2024 at 01:44:40PM -0600, David Vernet wrote:
> Hello Peter, hello Ingo,
> 
> Friendly ping. Is there anything else required for this to land?

Hello,

Sending another ping.

Thanks,
David

> 
> Thanks,
> David
> 
> > 
> > - In update_sd_lb_stats(), we're using a goto to skip a single if check.
> >   Let's remove the goto and just add another condition to the if.
> > 
> > - In update_sd_pick_busiest(), only update a group_misfit_task group to
> >   be the busiest if it has strictly more load than the current busiest
> >   group, rather than >= the load.
> > 
> > - When comparing the current struct sched_group with the yet-busiest
> >   group in update_sd_pick_busiest(), if the two groups have the same
> >   group type, we're currently doing a bit of unnecessary work for any
> >   group >= group_misfit_task. We're comparing the two groups, and then
> >   returning only if false (the group in question is not the busiest).
> >   Otherwise, we break, do an extra unnecessary conditional check that's
> >   vacuously false for any group type > group_fully_busy, and then always
> >   return true. This patch series has us instead simply return directly
> >   in the switch statement, saving some bytes in load_balance().
> > 
> > ---
> > 
> > v1: https://lore.kernel.org/lkml/20240202070216.2238392-1-void@manifault.com/
> > v2: https://lore.kernel.org/all/20240204044618.46100-1-void@manifault.com/
> > 
> > v2 -> v3:
> > - Add Vincent's Reviewed-by tags
> > - Fix stale commit summary sentence (Vincent)
> > 
> > v1 -> v2 changes:
> > 
> > - Split the patch series into separate patches (Valentin)
> > - Update the group_misfit_task busiest check to use strict inequality
> > 
> > David Vernet (3):
> >   sched/fair: Remove unnecessary goto in update_sd_lb_stats()
> >   sched/fair: Do strict inequality check for busiest misfit task group
> >   sched/fair: Simplify some logic in update_sd_pick_busiest()
> > 
> >  kernel/sched/fair.c | 19 ++++---------------
> >  1 file changed, 4 insertions(+), 15 deletions(-)
> > 
> > -- 
> > 2.43.0
> > 



