Message-ID: <1274256582.2766.5.camel@edumazet-laptop>
Date:	Wed, 19 May 2010 10:09:42 +0200
From:	Eric Dumazet <eric.dumazet@...il.com>
To:	Herbert Xu <herbert@...dor.apana.org.au>
Cc:	"David S. Miller" <davem@...emloft.net>,
	Thomas Graf <tgraf@...hat.com>,
	Neil Horman <nhorman@...hat.com>, netdev@...r.kernel.org
Subject: Re: tun: Use netif_receive_skb instead of netif_rx

On Wednesday 19 May 2010 at 17:57 +1000, Herbert Xu wrote:
> Hi:
> 
> tun: Use netif_receive_skb instead of netif_rx
> 
> First a bit of history as I recall, Dave can correct me where
> he recalls differently :)
> 
> 1) There was netif_rx and everyone had to use that.
> 2) Everyone had to use that, including drivers/net/tun.c.
> 3) NAPI brings us netif_receive_skb.
> 4) About the same time people noticed that tun.c can cause wild
>    fluctuations in latency because of its use of netif_rx with IRQs
>    enabled.
> 5) netif_rx_ni was added to address this.
> 

6) A point in netif_rx()'s favor is that packet processing is done while
stack usage is guaranteed to be low (it runs from process_backlog(),
using a dedicated softirq stack, instead of the current stack).
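
To be concrete, the split looks roughly like this (a simplified
sketch, not the exact net/core/dev.c code; locking and RPS details
omitted):

int netif_rx(struct sk_buff *skb)
{
	/*
	 * Shallow path: just append the skb to this CPU's
	 * input_pkt_queue and make sure NET_RX_SOFTIRQ runs.
	 */
	return enqueue_to_backlog(skb, smp_processor_id());
}

static int process_backlog(struct napi_struct *napi, int quota)
{
	struct softnet_data *sd = &__get_cpu_var(softnet_data);
	struct sk_buff *skb;
	int work = 0;

	/* Runs from the NET_RX_SOFTIRQ handler, on the softirq stack. */
	while (work < quota &&
	       (skb = __skb_dequeue(&sd->input_pkt_queue)) != NULL) {
		__netif_receive_skb(skb);	/* full stack traversal here */
		work++;
	}
	return work;
}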

After your patch, tun will use more stack. Is that safe in all contexts?

Another concern I have is about RPS.

netif_receive_skb() must be called from process_backlog() context;
otherwise there is no guarantee the IPI will be sent if the skb is
enqueued for another CPU.
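
For reference, the RPS path in netif_receive_skb() looks roughly like
this (simplified, names and signatures abbreviated):

int netif_receive_skb(struct sk_buff *skb)
{
	int cpu = get_rps_cpu(skb->dev, skb);	/* pick RPS target CPU */

	if (cpu >= 0)
		/*
		 * Queue on that CPU's backlog.  If it is a remote CPU,
		 * the IPI is only *pended* on rps_ipi_list here, it is
		 * not sent yet.
		 */
		return enqueue_to_backlog(skb, cpu);

	return __netif_receive_skb(skb);	/* no RPS: process locally */
}

static void net_rx_action(struct softirq_action *h)
{
	/* ... run process_backlog() and the other NAPI pollers ... */

	/*
	 * Only here, at the end of the softirq run, is rps_ipi_list
	 * walked and the pending IPIs actually sent to remote CPUs.
	 */
}

So if netif_receive_skb() is called from some other context, it is not
obvious that we reach that flush point in a timely way.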

> However, netif_rx_ni was really a bit of a roundabout way of
> injecting a packet if you think about it.  What ends up happening
> is that we always queue the packet into the backlog, and then
> immediately process it.  Which is what would happen if we simply
> called netif_receive_skb directly.
> 
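
For reference, netif_rx_ni() is essentially just (give or take the
details):

int netif_rx_ni(struct sk_buff *skb)
{
	int err;

	preempt_disable();
	err = netif_rx(skb);		/* queue on this CPU's backlog */
	if (local_softirq_pending())
		do_softirq();		/* ... then drain it right away */
	preempt_enable();

	return err;
}
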
> So this patch just does the obvious thing and makes tun.c call
> netif_receive_skb, albeit through the netif_receive_skb_ni wrapper
> which does the necessary things for calling it in process context.
> 
> Now apart from potential performance gains from eliminating
> unnecessary steps in the process, this has the benefit of keeping
> the process context for the packet processing.  This is needed
> by cgroups to shape network traffic based on the original process.
> 
> Signed-off-by: Herbert Xu <herbert@...dor.apana.org.au>
> 
> diff --git a/drivers/net/tun.c b/drivers/net/tun.c
> index 4326520..0eed49f 100644
> --- a/drivers/net/tun.c
> +++ b/drivers/net/tun.c
> @@ -667,7 +667,7 @@ static __inline__ ssize_t tun_get_user(struct tun_struct *tun,
>  		skb_shinfo(skb)->gso_segs = 0;
>  	}
>  
> -	netif_rx_ni(skb);
> +	netif_receive_skb_ni(skb);
>  
>  	tun->dev->stats.rx_packets++;
>  	tun->dev->stats.rx_bytes += len;
> diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
> index fa8b476..34bb405 100644
> --- a/include/linux/netdevice.h
> +++ b/include/linux/netdevice.h
> @@ -1562,6 +1562,18 @@ extern int		netif_rx(struct sk_buff *skb);
>  extern int		netif_rx_ni(struct sk_buff *skb);
>  #define HAVE_NETIF_RECEIVE_SKB 1
>  extern int		netif_receive_skb(struct sk_buff *skb);
> +
> +static inline int netif_receive_skb_ni(struct sk_buff *skb)
> +{
> +	int err;
> +
> +	local_bh_disable();
> +	err = netif_receive_skb(skb);
> +	local_bh_enable();
> +
> +	return err;
> +}
> +
>  extern gro_result_t	dev_gro_receive(struct napi_struct *napi,
>  					struct sk_buff *skb);
>  extern gro_result_t	napi_skb_finish(gro_result_t ret, struct sk_buff *skb);
> 
> Cheers,


