Date:	Thu, 04 Nov 2010 15:37:54 +0100
From:	Eric Dumazet <eric.dumazet@...il.com>
To:	Bjorn Helgaas <bjorn.helgaas@...com>
Cc:	Ingo Molnar <mingo@...e.hu>, Peter Zijlstra <peterz@...radead.org>,
	Venkatesh Pallipadi <venki@...gle.com>,
	Nikhil Rao <ncrao@...gle.com>,
	Takuya Yoshikawa <yoshikawa.takuya@....ntt.co.jp>,
	linux-kernel@...r.kernel.org
Subject: Re: divide error in select_task_rq_fair()

On Thursday, 04 November 2010 at 08:28 -0600, Bjorn Helgaas wrote:
> On Thu, Nov 04, 2010 at 06:19:52AM +0100, Eric Dumazet wrote:
> > On Wednesday, 03 November 2010 at 22:12 -0600, Bjorn Helgaas wrote:
> > > Hi,
> > > 
> > > With current upstream, I see the following crash at boot-time:
> > > 
> > >     Brought up 64 CPUs
> > >     Total of 64 processors activated (289366.52 BogoMIPS).
> > >     divide error: 0000 [#1] SMP 
> > >     last sysfs file: 
> > >     CPU 1 
> > >     Modules linked in:
> > > 
> > >     Pid: 2, comm: kthreadd Not tainted 2.6.37-rc1-00027-gff8b16d #271 /ProLiant DL980 G7
> > >     RIP: 0010:[<ffffffff81034645>]  [<ffffffff81034645>] select_task_rq_fair+0x62a/0x7a0
> > > 
> > > Complete dmesg below; let me know if you need more info.
> > 
> > Does the machine run OK if you build a kernel with NR_CPUS=128?
> 
> Nope, it fails the same way with NR_CPUS=128.  Dmesg below.

Sorry, please try 256 or 512 instead; it seems you have a pretty big machine.

8 nodes, but only nodes 0-3 are populated.
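
For context, the fault sits in the per-group load averaging that select_task_rq_fair() reaches through find_idlest_group(): each group's accumulated load is scaled by the group's cpu_power, and that division is unguarded. Below is a minimal user-space sketch of that pattern, not kernel code; fake_group and group_avg_load are illustrative names, and the assumption is that a zero cpu_power (a group covering only CPU slots that were never brought up, e.g. NR_CPUS slots beyond the populated NUMA nodes) is the divisor that traps.

/* Minimal user-space sketch (assumed pattern, not the kernel source) of the
 * unguarded per-group load scaling done on the select_task_rq_fair() path. */
#include <stdio.h>

#define SCHED_LOAD_SCALE 1024UL

struct fake_group {
	unsigned long sum_load;   /* total runnable load on the group's CPUs */
	unsigned long cpu_power;  /* 0 if no online CPU contributed power    */
};

static unsigned long group_avg_load(const struct fake_group *g)
{
	/* Equivalent of the kernel's unguarded division; cpu_power == 0
	 * divides by zero, which is what a boot-time "divide error" looks
	 * like on x86 (SIGFPE here in user space). */
	return (g->sum_load * SCHED_LOAD_SCALE) / g->cpu_power;
}

int main(void)
{
	struct fake_group populated = { .sum_load = 2048, .cpu_power = 1024 };
	struct fake_group empty     = { .sum_load = 0,    .cpu_power = 0    };

	printf("populated group avg load: %lu\n", group_avg_load(&populated));
	printf("empty group avg load: %lu\n", group_avg_load(&empty)); /* traps */
	return 0;
}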



