Date:	Wed, 18 May 2011 11:37:51 +0200
From:	Eric Dumazet <eric.dumazet@...il.com>
To:	Denys Fedoryshchenko <denys@...p.net.lb>
Cc:	netdev@...r.kernel.org
Subject:	Re: Bug, kernel panic, NULL dereference, cleanup_once / icmp_route_lookup.clone.19.clone / nat, 2.6.39-rc7-git11

On Wednesday, 18 May 2011 at 12:27 +0300, Denys Fedoryshchenko wrote:
> On Wed, 18 May 2011 01:16:29 +0300, Denys Fedoryshchenko wrote:
> > Just got this recently. 32-bit, PPPoE NAS, shapers, firewall, NAT.
> > The kernel is the one I mention in the subject, 2.6.39-rc7-git11.
> > If required I can give more information.
> >
> > sharanal (sorry for the ugly name) is a libpcap-based traffic analyser,
> > definitely userspace
> >
>  Here is some info, I hope it will be a little useful
> 
>  (gdb)  l *(cleanup_once + 0x49)
>  0xc02e85cc is in cleanup_once (include/linux/list.h:88).
>  83       * This is only for internal list manipulation where we know
>  84       * the prev/next entries already!
>  85       */
>  86      static inline void __list_del(struct list_head * prev, struct list_head * next)
>  87      {
>  88              next->prev = prev;
>  89              prev->next = next;
>  90      }
>  91
>  92      /**
> 
>  (gdb)  l *(inet_getpeer + 0x2ab)
>  0xc02e8ae8 is in inet_getpeer (net/ipv4/inetpeer.c:530).
>  525             if (base->total >= inet_peer_threshold)
>  526                     /* Remove one less-recently-used entry. */
>  527                     cleanup_once(0, stack);
>  528
>  529             return p;
>  530     }
>  531
>  532     static int compute_total(void)
>  533     {
>  534             return v4_peers.total + v6_peers.total;
> 

I'm really beginning to think we have a bug here...

In previous reports, I suggested using slub_nomerge because I thought a
corruption coming from another kernel layer was going on.

(inetpeer was using 64-byte objects.) But now that inetpeer objects are
bigger and sit in another kmem_cache, that's bad news.
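
A quick way to see whether that merging actually happened, as a sketch
assuming SLUB with sysfs and that the cache is still registered as
"inet_peer_cache" in net/ipv4/inetpeer.c:

  ls -l /sys/kernel/slab/inet_peer_cache      (a merged cache shows up as a symlink to a shared ":t-<size>" entry)
  cat /sys/kernel/slab/inet_peer_cache/aliases
  grep inet_peer /proc/slabinfo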

Could you try this, and possibly add some SLUB debugging as well?
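
A sketch of what that SLUB debugging could look like at boot time, assuming
the cache name above; the options themselves are the standard slub_debug /
slub_nomerge parameters described in Documentation/vm/slub.txt:

  slub_nomerge                       (keep each kmem_cache separate, as suggested before)
  slub_debug=FZPU                    (sanity checks, red zoning, poisoning, user tracking for all caches)
  slub_debug=FZPU,inet_peer_cache    (the same checks, restricted to the inetpeer cache)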



