[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-ID: <20121220155001.538bbdb0@nehalam.linuxnetplumber.net>
Date: Thu, 20 Dec 2012 15:50:01 -0800
From: Stephen Hemminger <shemminger@...tta.com>
To: Eric Dumazet <eric.dumazet@...il.com>
Cc: Paul Moore <pmoore@...hat.com>, Jason Wang <jasowang@...hat.com>,
netdev@...r.kernel.org
Subject: Re: TUN problems (regression?)
On Thu, 20 Dec 2012 15:38:17 -0800
Eric Dumazet <eric.dumazet@...il.com> wrote:
> On Thu, 2012-12-20 at 18:16 -0500, Paul Moore wrote:
> > [CC'ing netdev in case this is a known problem I just missed ...]
> >
> > Hi Jason,
> >
> > I started doing some more testing with the multiqueue TUN changes and I ran
> > into a problem when running tunctl: running it once w/o arguments works as
> > expected, but running it a second time results in failure and a
> > kmem_cache_sanity_check() failure. The problem appears to be very repeatable
> > on my test VM and happens independent of the LSM/SELinux fixup patches.
> >
> > Have you seen this before?
> >
>
> Obviously code in tun_flow_init() is wrong...
>
> static int tun_flow_init(struct tun_struct *tun)
> {
> int i;
>
> tun->flow_cache = kmem_cache_create("tun_flow_cache",
> sizeof(struct tun_flow_entry), 0, 0,
> NULL);
> if (!tun->flow_cache)
> return -ENOMEM;
> ...
> }
>
>
> I have no idea why we would need a kmem_cache per tun_struct,
> and why we even need a kmem_cache.
Normally flow malloc/free should be good enough.
It might make sense to use private kmem_cache if doing hlist_nulls.
Acked-by: Stephen Hemminger <shemminger@...tta.com>
--
To unsubscribe from this list: send the line "unsubscribe netdev" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Powered by blists - more mailing lists