Message-ID: <20090430124036.GN14729@mail.wantstofly.org>
Date: Thu, 30 Apr 2009 14:40:36 +0200
From: Lennert Buytenhek <buytenh@...tstofly.org>
To: Jarek Poplawski <jarkao2@...il.com>
Cc: davem@...emloft.net, netdev@...r.kernel.org
Subject: Re: oopses since "net: Optimize memory usage when splicing from sockets"
On Thu, Apr 30, 2009 at 10:43:21AM +0200, Jarek Poplawski wrote:
> > Since 4fb669948116d928ae44262ab7743732c574630d ("net: Optimize memory
> > usage when splicing from sockets.") I'm seeing this oops (e.g. in
> > 2.6.30-rc3) when splicing from a TCP socket to /dev/null on a driver
> > (mv643xx_eth) that uses LRO in the skb mode (lro_receive_skb) rather
> > than the frag mode:
> ...
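
(For reference, the userspace side is nothing fancy -- basically just a
splice loop like the sketch below.  This is not my actual test program;
the address, port and chunk size are made up for illustration.)

#define _GNU_SOURCE             /* for splice() and SPLICE_F_MOVE */
#include <arpa/inet.h>
#include <fcntl.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
        struct sockaddr_in sin;
        int pfd[2];
        int sock = socket(AF_INET, SOCK_STREAM, 0);
        int null_fd = open("/dev/null", O_WRONLY);

        if (sock < 0 || null_fd < 0 || pipe(pfd) < 0) {
                perror("setup");
                return 1;
        }

        memset(&sin, 0, sizeof(sin));
        sin.sin_family = AF_INET;
        sin.sin_port = htons(5001);                     /* made-up port */
        sin.sin_addr.s_addr = inet_addr("192.168.1.1"); /* made-up sender */

        if (connect(sock, (struct sockaddr *)&sin, sizeof(sin)) < 0) {
                perror("connect");
                return 1;
        }

        for (;;) {
                /* TCP socket -> pipe: this is what ends up in skb_splice_bits() */
                ssize_t n = splice(sock, NULL, pfd[1], NULL, 65536, SPLICE_F_MOVE);

                if (n <= 0)
                        break;

                /* pipe -> /dev/null: drain what we just pulled in */
                splice(pfd[0], NULL, null_fd, NULL, n, SPLICE_F_MOVE);
        }

        return 0;
}
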
> > addr2line suggests skb->sk is NULL in linear_to_page():
> >
> >
> > static inline struct page *linear_to_page(struct page *page, unsigned int *len,
> >                                           unsigned int *offset,
> >                                           struct sk_buff *skb)
> > {
> >         struct sock *sk = skb->sk;
> >         struct page *p = sk->sk_sndmsg_page;   <========
> >         unsigned int off;
> >
> >         if (!p) {
> >
> >
> > When we get here, skb->sk has apparently already been dropped, leading
> > to a NULL pointer deref. Backing out the offending commit makes the
> > oops go away (as does converting the driver to lro frag rx, but that
> > destroys routing performance).
> >
> > Thoughts? Should we just fall back to plain alloc_pages() if skb->sk
> > is NULL, or should we still have the socket reference when we get here?
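
(For concreteness, the alloc_pages() fallback I have in mind would be
something like the sketch below.  Entirely untested -- the GFP flag, the
length clamp and the copy details are guesses on my part:)

static inline struct page *linear_to_page(struct page *page, unsigned int *len,
                                          unsigned int *offset,
                                          struct sk_buff *skb)
{
        struct sock *sk = skb->sk;
        struct page *p;

        if (!sk) {
                /*
                 * No socket reference: copy into a plain one-off page
                 * instead of the per-socket sk_sndmsg_page cache.
                 */
                p = alloc_pages(GFP_ATOMIC, 0);
                if (!p)
                        return NULL;

                *len = min_t(unsigned int, *len, PAGE_SIZE);
                memcpy(page_address(p), page_address(page) + *offset, *len);
                *offset = 0;
                return p;
        }

        /* ... existing sk_sndmsg_page path continues unchanged ... */
}
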
>
> Hmm... I definitely need more time for this, but my first (and maybe
> wrong) impression is that this is an skb from the frag_list. There are
> probably better ways of fixing it properly, but here is a quick hack to
> start with (alas, not even compile-tested at the moment).
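
If I'm reading net/core/skbuff.c correctly, that would explain the NULL
skb->sk: only the head skb is charged to the socket (skb_set_owner_r()
sets skb->sk on it), while the skbs chained on its frag_list are not,
and the splice code walks that list and feeds each of them to
linear_to_page().  Roughly (paraphrased from memory, so the details may
be off):

        if (skb_shinfo(skb)->frag_list) {
                struct sk_buff *list = skb_shinfo(skb)->frag_list;

                /* each 'list' skb here has never been through
                 * skb_set_owner_r(), so list->sk is NULL */
                for (; list && tlen; list = list->next)
                        if (__skb_splice_bits(list, &offset, &tlen, &spd))
                                break;
        }
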
With your patch, at least the oops is gone, and I guess it makes
sense and looks correct, so:
Tested-by: Lennert Buytenhek <buytenh@...tstofly.org>
Thanks!