Date:	Thu, 27 Aug 2009 14:32:53 +0200
From:	Peter Zijlstra <a.p.zijlstra@...llo.nl>
To:	Eric Dumazet <eric.dumazet@...il.com>
Cc:	Yinghai Lu <yinghai@...nel.org>, mingo@...hat.com, hpa@...or.com,
	linux-kernel@...r.kernel.org, torvalds@...ux-foundation.org,
	jes@....com, jens.axboe@...cle.com, tglx@...utronix.de,
	mingo@...e.hu, Balbir Singh <balbir@...ux.vnet.ibm.com>,
	Arjan van de Ven <arjan@...radead.org>,
	linux-tip-commits@...r.kernel.org
Subject: Re: [PATCH] sched: Avoid division by zero - really

On Thu, 2009-08-27 at 14:19 +0200, Eric Dumazet wrote:
> Peter Zijlstra wrote:
> > When re-computing the shares for each task group's cpu representation,
> > we need the ratio of each cpu's weight vs the total weight of the sched
> > domain.
> > 
> > Since load-balancing is loosely (read not) synchronized, the weight of
> > individual cpus can change between doing the sum and calculating the
> > ratio.
> > 
> > The previous patch dealt with only one of the race scenarios; this
> > patch sidesteps them all by saving a snapshot of all the individual
> > cpu weights, thereby always working on a consistent set.
> > 
> > Signed-off-by: Peter Zijlstra <a.p.zijlstra@...llo.nl>
> > ---
> >  kernel/sched.c |   50 +++++++++++++++++++++++++++++---------------------
> >  1 files changed, 29 insertions(+), 21 deletions(-)
> > 
> > diff --git a/kernel/sched.c b/kernel/sched.c
> > index 0e76b17..4591054 100644
> > --- a/kernel/sched.c
> > +++ b/kernel/sched.c
> > @@ -1515,30 +1515,29 @@ static unsigned long cpu_avg_load_per_task(int cpu)
> >  
> >  #ifdef CONFIG_FAIR_GROUP_SCHED
> >  
> > +struct update_shares_data {
> > +	unsigned long rq_weight[NR_CPUS];
> > +};
> > +
> > +static DEFINE_PER_CPU(struct update_shares_data, update_shares_data);
> 
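A quick sketch of how such a snapshot array would be used -- illustrative
only, since the quoted diff above stops at the new struct and the rest of
the hunk is snipped: each cpu's weight is read exactly once into the
per-cpu scratch array, and both the sum and the per-cpu ratios are then
computed from that single snapshot, so no weight can change between doing
the sum and the division.

	/*
	 * Sketch (hypothetical body, not the actual patch): snapshot
	 * first, then compute shares from the snapshot only.
	 */
	static void tg_shares_up_sketch(struct task_group *tg,
					struct sched_domain *sd)
	{
		struct update_shares_data *usd;
		unsigned long rq_weight = 0;
		unsigned long shares;
		int i;

		usd = &get_cpu_var(update_shares_data);

		for_each_cpu(i, sched_domain_span(sd)) {
			usd->rq_weight[i] = tg->cfs_rq[i]->load.weight;
			rq_weight += usd->rq_weight[i];
		}

		for_each_cpu(i, sched_domain_span(sd)) {
			/* ratio comes from the snapshot, never > rq_weight */
			shares = tg->shares * usd->rq_weight[i];
			if (rq_weight)
				shares /= rq_weight;
			/* ... apply 'shares' to tg's cpu-i runqueue ... */
		}

		put_cpu_var(update_shares_data);
	}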
> ouch... that's quite large IMHO, up to 4096*8 = 32768 bytes per cpu...
> 
> Now that we have nice dynamic per-cpu allocations, we could use them
> here, with nr_cpu_ids instead of NR_CPUS as the array size?
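
Something along these lines, presumably (a sketch -- the call site and
sizing below are guesses, not an actual patch):

	/* dynamically allocated, one unsigned long per possible cpu */
	static unsigned long __percpu *update_shares_data;

	/* e.g. from sched_init(): */
	update_shares_data = __alloc_percpu(nr_cpu_ids * sizeof(unsigned long),
					    __alignof__(unsigned long));

	/* then index this cpu's copy as per_cpu_ptr(update_shares_data, cpu)[i] */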

Possibly, but I guess that should include stuff like
static_sched_{domain,group} too, since they seem to have the same
problem.
