Date:	Wed, 24 Sep 2014 19:38:40 -0700
From:	Tom Herbert <therbert@...gle.com>
To:	Eric Dumazet <eric.dumazet@...il.com>
Cc:	Jesper Dangaard Brouer <brouer@...hat.com>,
	Linux Netdev List <netdev@...r.kernel.org>,
	"David S. Miller" <davem@...emloft.net>,
	Alexander Duyck <alexander.h.duyck@...el.com>,
	Toke Høiland-Jørgensen <toke@...e.dk>,
	Florian Westphal <fw@...len.de>,
	Jamal Hadi Salim <jhs@...atatu.com>,
	Dave Taht <dave.taht@...il.com>,
	John Fastabend <john.r.fastabend@...el.com>,
	Daniel Borkmann <dborkman@...hat.com>,
	Hannes Frederic Sowa <hannes@...essinduktion.org>
Subject: Re: [net-next PATCH 1/1 V4] qdisc: bulk dequeue support for qdiscs
 with TCQ_F_ONETXQUEUE

On Wed, Sep 24, 2014 at 7:12 PM, Eric Dumazet <eric.dumazet@...il.com> wrote:
> On Wed, 2014-09-24 at 12:22 -0700, Eric Dumazet wrote:
>> On Wed, 2014-09-24 at 11:34 -0700, Tom Herbert wrote:
>> > >
>> > I believe drivers typically use skb->len for BQL tracking. Since
>> > bytelimit is based on BQL here, it might be more correct to use
>> > skb->len.
>
> Speaking of BQL, I wonder if we should now try not to wake up queues
> as soon as some room is made, and instead use a 50% threshold ?
>
> This would probably increase the probability of bulk dequeues ;)
>
It would be good to have data on that. In the absence of TSO, I've
seen BQL limits of around 30K at "standard" interrupt rates on 10G.
That should mean ~15K becomes available every interrupt period (the
math is actually straightforward), which already gives batches of
about 10 packets; not bad!
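
Spelling out that arithmetic (assuming MTU-sized 1500-byte frames and
roughly half the queue draining per interrupt period): 30000 / 2 =
15000 bytes freed per interrupt, and 15000 / 1500 = 10 packets
available to dequeue as one batch.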

It's also probably true that we can trade off batching against
latency in many ways: more batching increases latency, less batching
reduces it. For instance, the interrupt rate can be modulated to
balance latency against batching (and hence CPU utilization).
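
One concrete knob for that is NIC interrupt coalescing: where the
driver supports them, ethtool -C's tx-usecs/tx-frames parameters set
how long TX completions are held back, so raising them trades added
latency for larger completion batches per interrupt.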

Tom

>
> diff --git a/include/linux/dynamic_queue_limits.h b/include/linux/dynamic_queue_limits.h
> index 5621547d631b..c0be7ff5ae97 100644
> --- a/include/linux/dynamic_queue_limits.h
> +++ b/include/linux/dynamic_queue_limits.h
> @@ -83,6 +83,12 @@ static inline int dql_avail(const struct dql *dql)
>         return dql->adj_limit - dql->num_queued;
>  }
>
> +/* Returns true if queue occupancy is at most half of the limit */
> +static inline bool dql_half_avail(const struct dql *dql)
> +{
> +       return dql->adj_limit >= (dql->num_queued << 1);
> +}
> +
>  /* Record number of completed objects and recalculate the limit. */
>  void dql_completed(struct dql *dql, unsigned int count);
>
> diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
> index c8e388e5fccc..1f7541284b32 100644
> --- a/include/linux/netdevice.h
> +++ b/include/linux/netdevice.h
> @@ -2413,7 +2413,7 @@ static inline void netdev_tx_sent_queue(struct netdev_queue *dev_queue,
>         smp_mb();
>
>         /* check again in case another CPU has just made room avail */
> -       if (unlikely(dql_avail(&dev_queue->dql) >= 0))
> +       if (unlikely(dql_half_avail(&dev_queue->dql)))
>                 clear_bit(__QUEUE_STATE_STACK_XOFF, &dev_queue->state);
>  #endif
>  }
> @@ -2448,7 +2448,7 @@ static inline void netdev_tx_completed_queue(struct netdev_queue *dev_queue,
>          */
>         smp_mb();
>
> -       if (dql_avail(&dev_queue->dql) < 0)
> +       if (!dql_half_avail(&dev_queue->dql))
>                 return;
>
>         if (test_and_clear_bit(__QUEUE_STATE_STACK_XOFF, &dev_queue->state))
>
>
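
For reference, the driver-side accounting that the skb->len remark
above refers to typically looks like the sketch below (a minimal
illustration only; the foo_* names are hypothetical, and ring
management, locking and multiqueue cleanup details are elided):

#include <linux/netdevice.h>
#include <linux/skbuff.h>

/* TX path: report bytes handed to the NIC. Drivers typically pass
 * skb->len here, which is why the qdisc byte limit should use it too. */
static netdev_tx_t foo_start_xmit(struct sk_buff *skb, struct net_device *dev)
{
	struct netdev_queue *txq;

	txq = netdev_get_tx_queue(dev, skb_get_queue_mapping(skb));

	/* ... post skb to the hardware TX ring here ... */

	netdev_tx_sent_queue(txq, skb->len);	/* may stop the queue */
	return NETDEV_TX_OK;
}

/* TX completion (NAPI poll): report what the hardware finished.
 * This recalculates the BQL limit and, with the patch above, would
 * only wake the queue once at least half of it is available again. */
static void foo_clean_tx(struct net_device *dev, unsigned int pkts,
			 unsigned int bytes)
{
	struct netdev_queue *txq = netdev_get_tx_queue(dev, 0);

	netdev_tx_completed_queue(txq, pkts, bytes);
}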