Date:	Sat, 23 Aug 2014 16:34:50 -0700
From:	Alexander Duyck <alexander.duyck@...il.com>
To:	David Miller <davem@...emloft.net>, netdev@...r.kernel.org
CC:	therbert@...gle.com, jhs@...atatu.com, hannes@...essinduktion.org,
	edumazet@...gle.com, jeffrey.t.kirsher@...el.com,
	rusty@...tcorp.com.au
Subject: Re: [PATCH 1/3] net: Add ops->ndo_xmit_flush()

On 08/23/2014 01:28 PM, David Miller wrote:
> diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
> index 7e2b0b8..1d05932 100644
> --- a/include/linux/netdevice.h
> +++ b/include/linux/netdevice.h
> @@ -782,6 +782,19 @@ typedef u16 (*select_queue_fallback_t)(struct net_device *dev,
>   *        (can also return NETDEV_TX_LOCKED iff NETIF_F_LLTX)
>   *	Required can not be NULL.
>   *
> + * void (*ndo_xmit_flush)(struct net_device *dev, u16 queue);
> + *	A driver implements this function when it wishes to support
> + *	deferred TX queue flushing.  The idea is that the expensive
> + *	operation to trigger TX queue processing can be done after
> + *	N calls to ndo_start_xmit rather than being done every single
> + *	time.  In this regime ndo_start_xmit will be called one or more
> + *	times, and then a final ndo_xmit_flush call will be made to
> + *	have the driver tell the device about the new pending TX queue
> + *	entries.  The kernel keeps track of which queues need flushing
> + *	by monitoring skb->queue_mapping of the packets it submits to
> + *	ndo_start_xmit.  This is the queue value that will be passed
> + *	to ndo_xmit_flush.
> + *
>   * u16 (*ndo_select_queue)(struct net_device *dev, struct sk_buff *skb,
>   *                         void *accel_priv, select_queue_fallback_t fallback);
>   *	Called to decide which queue to when device supports multiple
> @@ -1005,6 +1018,7 @@ struct net_device_ops {
>  	int			(*ndo_stop)(struct net_device *dev);
>  	netdev_tx_t		(*ndo_start_xmit) (struct sk_buff *skb,
>  						   struct net_device *dev);
> +	void			(*ndo_xmit_flush)(struct net_device *dev, u16 queue);
>  	u16			(*ndo_select_queue)(struct net_device *dev,
>  						    struct sk_buff *skb,
>  						    void *accel_priv,
> @@ -3358,6 +3372,27 @@ int __init dev_proc_init(void);
>  #define dev_proc_init() 0
>  #endif
>  
> +static inline netdev_tx_t __netdev_start_xmit(const struct net_device_ops *ops,
> +					      struct sk_buff *skb, struct net_device *dev)
> +{
> +	netdev_tx_t ret;
> +	u16 q;
> +
> +	q = skb->queue_mapping;
> +	ret = ops->ndo_start_xmit(skb, dev);
> +	if (ops->ndo_xmit_flush)
> +		ops->ndo_xmit_flush(dev, q);
> +
> +	return ret;
> +}
> +

What about the case of ndo_start_xmit returning something like
NETDEV_TX_BUSY?  I am pretty sure you shouldn't be flushing unless
something has actually been enqueued.  You might want to add a new
return value that indicates a frame has been enqueued but not flushed,
and only then start down the ndo_xmit_flush path.  Maybe something
like NETDEV_TX_DEFERRED.
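
A rough sketch of what I mean, with stand-in types rather than the real
kernel headers, and NETDEV_TX_DEFERRED being the hypothetical new code:

```c
#include <stddef.h>

/* Stand-in types for illustration -- not the real kernel definitions. */
typedef enum {
	NETDEV_TX_OK = 0,
	NETDEV_TX_BUSY,
	NETDEV_TX_DEFERRED,	/* hypothetical: frame queued, flush still needed */
} netdev_tx_t;

struct sk_buff { unsigned short queue_mapping; };
struct net_device;

struct net_device_ops {
	netdev_tx_t (*ndo_start_xmit)(struct sk_buff *skb, struct net_device *dev);
	void (*ndo_xmit_flush)(struct net_device *dev, unsigned short queue);
};

/* Flush only when the driver reports the frame was enqueued but deferred;
 * a NETDEV_TX_BUSY return leaves the queue untouched, so no flush. */
static netdev_tx_t __netdev_start_xmit(const struct net_device_ops *ops,
				       struct sk_buff *skb,
				       struct net_device *dev)
{
	unsigned short q = skb->queue_mapping;
	netdev_tx_t ret = ops->ndo_start_xmit(skb, dev);

	if (ret == NETDEV_TX_DEFERRED && ops->ndo_xmit_flush) {
		ops->ndo_xmit_flush(dev, q);
		ret = NETDEV_TX_OK;
	}
	return ret;
}

/* Toy driver callbacks so the helper can be exercised. */
static int flush_count;
static unsigned short last_flushed_queue;

static netdev_tx_t fake_xmit(struct sk_buff *skb, struct net_device *dev)
{
	(void)skb; (void)dev;
	return NETDEV_TX_DEFERRED;
}

static void fake_flush(struct net_device *dev, unsigned short queue)
{
	(void)dev;
	flush_count++;
	last_flushed_queue = queue;
}

static const struct net_device_ops fake_ops = {
	fake_xmit,
	fake_flush,
};
```

That way a NETDEV_TX_BUSY or dropped frame never triggers a doorbell
write for work that was never posted.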

You might even want to have a return from ndo_xmit_flush just to cover
any oddball cases like a lockless Tx where we might not be able to flush
because the queue is already being flushed by another entity.
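
A minimal model of that contended case, with invented status names and a
plain flag standing in for what would really be an atomic try-lock:

```c
#include <stdbool.h>

/* Hypothetical return codes for a flush that can be contended. */
enum flush_status { FLUSH_DONE, FLUSH_BUSY };

/* Minimal model of a lockless Tx queue: whoever owns the "flushing"
 * flag is responsible for ringing the doorbell for all pending work. */
struct tx_queue {
	bool flushing;	/* another context is already flushing */
	int  pending;	/* descriptors queued but not yet told to HW */
};

static enum flush_status xmit_flush(struct tx_queue *txq)
{
	if (txq->flushing)
		return FLUSH_BUSY;	/* the other entity will pick these up */

	txq->flushing = true;
	txq->pending = 0;		/* pretend: write the tail register */
	txq->flushing = false;
	return FLUSH_DONE;
}
```

The caller could then decide whether FLUSH_BUSY needs a retry or can be
safely ignored because the concurrent flusher covers the new entries.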

Thanks,

Alex