Date:	Wed, 22 Jun 2016 19:53:29 -0700
From:	"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
To:	Joonsoo Kim <iamjoonsoo.kim@....com>
Cc:	Geert Uytterhoeven <geert@...ux-m68k.org>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Christoph Lameter <cl@...ux.com>,
	Pekka Enberg <penberg@...nel.org>,
	David Rientjes <rientjes@...gle.com>,
	Jesper Dangaard Brouer <brouer@...hat.com>,
	Linux MM <linux-mm@...ck.org>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	linux-renesas-soc@...r.kernel.org,
	"linux-arm-kernel@...ts.infradead.org" 
	<linux-arm-kernel@...ts.infradead.org>
Subject: Re: Boot failure on emev2/kzm9d (was: Re: [PATCH v2 11/11] mm/slab:
 lockless decision to grow cache)

On Wed, Jun 22, 2016 at 07:47:42PM -0700, Paul E. McKenney wrote:
> On Thu, Jun 23, 2016 at 11:37:56AM +0900, Joonsoo Kim wrote:
> > On Wed, Jun 22, 2016 at 05:49:35PM -0700, Paul E. McKenney wrote:
> > > On Wed, Jun 22, 2016 at 12:08:59PM -0700, Paul E. McKenney wrote:
> > > > On Wed, Jun 22, 2016 at 05:01:35PM +0200, Geert Uytterhoeven wrote:
> > > > > On Wed, Jun 22, 2016 at 2:52 AM, Joonsoo Kim <iamjoonsoo.kim@....com> wrote:
> > > > > > Could you try below patch to check who causes the hang?
> > > > > >
> > > > > > And, if sysrq-t works during the hang, could you get the sysrq-t output? I haven't
> > > > > > used it before, but Paul could find some culprit in it. :)
> > > > > >
> > > > > > Thanks.
> > > > > >
> > > > > >
> > > > > > ----->8-----
> > > > > > diff --git a/mm/slab.c b/mm/slab.c
> > > > > > index 763096a..9652d38 100644
> > > > > > --- a/mm/slab.c
> > > > > > +++ b/mm/slab.c
> > > > > > @@ -964,8 +964,13 @@ static int setup_kmem_cache_node(struct kmem_cache *cachep,
> > > > > >          * guaranteed to be valid until irq is re-enabled, because it will be
> > > > > >          * freed after synchronize_sched().
> > > > > >          */
> > > > > > -       if (force_change)
> > > > > > +       if (force_change) {
> > > > > > +               if (num_online_cpus() > 1)
> > > > > > +                       dump_stack();
> > > > > >                 synchronize_sched();
> > > > > > +               if (num_online_cpus() > 1)
> > > > > > +                       dump_stack();
> > > > > > +       }
> > > > > 
> > > > > I've only added the first one, as I would never see the second one. All of
> > > > > this happens before the serial console is activated, earlycon is not supported,
> > > > > and I only have remote access.
> > > > > 
> > > > > Brought up 2 CPUs
> > > > > SMP: Total of 2 processors activated (2132.00 BogoMIPS).
> > > > > CPU: All CPU(s) started in SVC mode.
> > > > > CPU: 0 PID: 1 Comm: swapper/0 Not tainted 4.7.0-rc4-kzm9d-00404-g4a235e6dde4404dd-dirty #89
> > > > > Hardware name: Generic Emma Mobile EV2 (Flattened Device Tree)
> > > > > [<c010de68>] (unwind_backtrace) from [<c010a658>] (show_stack+0x10/0x14)
> > > > > [<c010a658>] (show_stack) from [<c02b5cf8>] (dump_stack+0x7c/0x9c)
> > > > > [<c02b5cf8>] (dump_stack) from [<c01cfa4c>] (setup_kmem_cache_node+0x140/0x170)
> > > > > [<c01cfa4c>] (setup_kmem_cache_node) from [<c01cfe3c>] (__do_tune_cpucache+0xf4/0x114)
> > > > > [<c01cfe3c>] (__do_tune_cpucache) from [<c01cff54>] (enable_cpucache+0xf8/0x148)
> > > > > [<c01cff54>] (enable_cpucache) from [<c01d0190>] (__kmem_cache_create+0x1a8/0x1d0)
> > > > > [<c01d0190>] (__kmem_cache_create) from [<c01b32d0>] (kmem_cache_create+0xbc/0x190)
> > > > > [<c01b32d0>] (kmem_cache_create) from [<c070d968>] (shmem_init+0x34/0xb0)
> > > > > [<c070d968>] (shmem_init) from [<c0700cc8>] (kernel_init_freeable+0x98/0x1ec)
> > > > > [<c0700cc8>] (kernel_init_freeable) from [<c049fdbc>] (kernel_init+0x8/0x110)
> > > > > [<c049fdbc>] (kernel_init) from [<c0106cb8>] (ret_from_fork+0x14/0x3c)
> > > > > devtmpfs: initialized
> > > > 
> > > > I don't see anything here that would prevent grace periods from completing.
> > > > 
> > > > The CPUs are using the normal hotplug sequence to come online, correct?
> > > 
> > > And either way, could you please apply the patch below and then
> > > invoke rcu_dump_rcu_sched_tree() just before the offending call to
> > > synchronize_sched()?  That will tell me what CPUs RCU believes exist,
> > > and perhaps also which CPU is holding it up.
> > 
> > I can't find rcu_dump_rcu_sched_tree(). Do you mean
> > rcu_dump_rcu_node_tree()? Anyway, there is no patch below, so I attach
> > one which does what Paul wants, maybe.
> 
> One of those days, I guess!  :-/
> 
> Your patch is exactly what I intended to send, thank you!

Ah, but your telepathy was not sufficient to intuit the additional
information I need.  Please see the patch at the end.  Your hunk
in mm/slab.c is needed on top of my patch.

So I am clearly having difficulties reading as well as including patches
today...

							Thanx, Paul

> > Thanks.
> > 
> > ------->8---------
> > diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
> > index 88d3f95..6b650f0 100644
> > --- a/kernel/rcu/tree.c
> > +++ b/kernel/rcu/tree.c
> > @@ -4171,7 +4171,7 @@ static void __init rcu_init_geometry(void)
> >   * Dump out the structure of the rcu_node combining tree associated
> >   * with the rcu_state structure referenced by rsp.
> >   */
> > -static void __init rcu_dump_rcu_node_tree(struct rcu_state *rsp)
> > +static void rcu_dump_rcu_node_tree(struct rcu_state *rsp)
> >  {
> >         int level = 0;
> >         struct rcu_node *rnp;
> > @@ -4189,6 +4189,11 @@ static void __init rcu_dump_rcu_node_tree(struct rcu_state *rsp)
> >         pr_cont("\n");
> >  }
> > 
> > +void rcu_dump_rcu_sched_tree(void)
> > +{
> > +       rcu_dump_rcu_node_tree(&rcu_sched_state);
> > +}
> > +
> >  void __init rcu_init(void)
> >  {
> >         int cpu;
> > diff --git a/mm/slab.c b/mm/slab.c
> > index 763096a..d88976c 100644
> > --- a/mm/slab.c
> > +++ b/mm/slab.c
> > @@ -909,6 +909,8 @@ static int init_cache_node_node(int node)
> >         return 0;
> >  }
> > 
> > +extern void rcu_dump_rcu_sched_tree(void);
> > +
> >  static int setup_kmem_cache_node(struct kmem_cache *cachep,
> >                                 int node, gfp_t gfp, bool force_change)
> >  {
> > @@ -964,8 +966,10 @@ static int setup_kmem_cache_node(struct kmem_cache *cachep,
> >          * guaranteed to be valid until irq is re-enabled, because it will be
> >          * freed after synchronize_sched().
> >          */
> > -       if (force_change)
> > +       if (force_change) {
> > +               rcu_dump_rcu_sched_tree();
> >                 synchronize_sched();
> > +       }
> > 
> >  fail:
> >         kfree(old_shared);
> > 

------------------------------------------------------------------------

diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index c7f1bc4f817c..2eda7bece401 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -4707,7 +4707,7 @@ static void __init rcu_init_geometry(void)
  * Dump out the structure of the rcu_node combining tree associated
  * with the rcu_state structure referenced by rsp.
  */
-static void __init rcu_dump_rcu_node_tree(struct rcu_state *rsp)
+static void rcu_dump_rcu_node_tree(struct rcu_state *rsp)
 {
 	int level = 0;
 	struct rcu_node *rnp;
@@ -4720,11 +4720,18 @@ static void __init rcu_dump_rcu_node_tree(struct rcu_state *rsp)
 			pr_info(" ");
 			level = rnp->level;
 		}
-		pr_cont("%d:%d ^%d  ", rnp->grplo, rnp->grphi, rnp->grpnum);
+		pr_cont("%d:%d/%#lx/%#lx ^%d  ", rnp->grplo, rnp->grphi,
+			rnp->qsmask,
+			rnp->qsmaskinit | rnp->qsmaskinitnext, rnp->grpnum);
 	}
 	pr_cont("\n");
 }
 
+void rcu_dump_rcu_sched_tree(void)
+{
+	rcu_dump_rcu_node_tree(&rcu_sched_state);
+}
+
 void __init rcu_init(void)
 {
 	int cpu;

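[Editorial note: for reference, a minimal sketch of the mm/slab.c side that goes on top of
this patch; the extern declaration and the placement around synchronize_sched() are taken
from Joonsoo's hunk quoted above, and everything else in setup_kmem_cache_node() is elided:]

	/* mm/slab.c: declaration of the helper made non-__init and exported above. */
	extern void rcu_dump_rcu_sched_tree(void);

	/* ... inside setup_kmem_cache_node(), just before the suspect wait ... */
	if (force_change) {
		/*
		 * Dump the rcu_sched rcu_node tree right before the grace
		 * period that appears to hang: which CPUs RCU believes exist
		 * (qsmaskinit | qsmaskinitnext) and which are still holding
		 * up the current grace period (qsmask).
		 */
		rcu_dump_rcu_sched_tree();
		synchronize_sched();
	}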