Message-ID: <52261A12.3060203@wwwdotorg.org>
Date: Tue, 03 Sep 2013 11:19:14 -0600
From: Stephen Warren <swarren@...dotorg.org>
To: Thomas Graf <tgraf@...g.ch>
CC: davem@...emloft.net, netdev@...r.kernel.org,
Eric Dumazet <eric.dumazet@...il.com>,
Hannes Frederic Sowa <hannes@...essinduktion.org>,
Fabio Estevam <festevam@...il.com>
Subject: Re: [PATCH v2] ipv6: Don't depend on per socket memory for neighbour
discovery messages
On 09/03/2013 05:37 AM, Thomas Graf wrote:
> Allocating skbs when sending out neighbour discovery messages
> currently uses sock_alloc_send_skb() based on a per net namespace
> socket and thus shares that socket's wmem buffer space.
>
> If a netdevice is temporarily unable to transmit due to carrier
> loss or for other reasons, the queued-up ndisc messages will consume
> all of the wmem space and will thus prevent any more skbs from being
> allocated, even for netdevices that are able to transmit packets.
>
> Since the number of neighbour discovery messages sent is very
> limited, it is safe for alloc_skb() to bypass the socket wmem buffer
> size enforcement; the manual call to skb_set_owner_w() maintains the
> socket reference needed for the IPv6 output path.
>
> This patch was originally posted by Eric Dumazet in a modified
> form.
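For anyone following along, the allocation pattern the changelog
describes looks roughly like the sketch below. The helper name
ndisc_alloc_skb() and the exact headroom/tailroom arithmetic are
illustrative here, not necessarily the precise hunk in the patch:

#include <linux/if_ether.h>
#include <linux/netdevice.h>
#include <linux/skbuff.h>
#include <net/ipv6.h>
#include <net/sock.h>

static struct sk_buff *ndisc_alloc_skb(struct net_device *dev,
				       struct sock *sk, int len)
{
	int hlen = LL_RESERVED_SPACE(dev);
	int tlen = dev->needed_tailroom;
	struct sk_buff *skb;

	/* Plain alloc_skb(): nothing is charged against the shared
	 * per-netns socket's wmem budget, so a stalled netdevice can
	 * no longer starve skb allocation for the others. */
	skb = alloc_skb(hlen + sizeof(struct ipv6hdr) + len + tlen,
			GFP_ATOMIC);
	if (!skb)
		return NULL;

	skb->protocol = htons(ETH_P_IPV6);
	skb->dev = dev;

	skb_reserve(skb, hlen + sizeof(struct ipv6hdr));
	skb_reset_transport_header(skb);

	/* Attach the ndisc socket by hand so the IPv6 output path
	 * still sees a valid skb->sk owner reference. */
	skb_set_owner_w(skb, sk);

	return skb;
}

The trade-off is that these allocations are no longer bounded by the
socket's sndbuf, which seems acceptable here given how little ndisc
traffic there is.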
Tested-by: Stephen Warren <swarren@...dia.com>
Although I do note something slightly odd:
next-20130830 had an issue, and reverting V1 of this patch solved it.
However, in next-20130903, if I revert the revert of V1 of this patch, I
don't see any issue; it appears that the problem was some interaction
between V1 of this patch and something else in next-20130830.
Either way, this patch doesn't seem to introduce any issue when applied
on top of either next-20130830 with V1 reverted, or on top of
next-20130903, so it's fine.