Message-ID: <52DD614D.5080604@citrix.com>
Date:	Mon, 20 Jan 2014 17:47:57 +0000
From:	Zoltan Kiss <zoltan.kiss@...rix.com>
To:	Wei Liu <wei.liu2@...rix.com>
CC:	<ian.campbell@...rix.com>, <xen-devel@...ts.xenproject.org>,
	<netdev@...r.kernel.org>, <linux-kernel@...r.kernel.org>,
	<jonathan.davies@...rix.com>
Subject: Re: [PATCH net-next v4 8/9] xen-netback: Timeout packets in RX path

On 20/01/14 16:53, Wei Liu wrote:
>>>> @@ -559,7 +579,7 @@ void xenvif_free(struct xenvif *vif)
>>>>   		if (vif->grant_tx_handle[i] != NETBACK_INVALID_HANDLE) {
>>>>   			unmap_timeout++;
>>>>   			schedule_timeout(msecs_to_jiffies(1000));
>>>> -			if (unmap_timeout > 9 &&
>>>> +			if (unmap_timeout > ((rx_drain_timeout_msecs/1000) * DIV_ROUND_UP(XENVIF_QUEUE_LENGTH, (XEN_NETIF_RX_RING_SIZE / MAX_SKB_FRAGS))) &&
>>>
>>> This line is really too long. And what's the rationale behind this long
>>> expression?
>> It calculates how many times you should ditch the internal queue of
>> another (possibly stuck) vif before the Qdisc empties its actual
>> content. After that there shouldn't be any mapped handle left, so we
>> can start printing these messages. Actually it should use
>> vif->dev->tx_queue_len, and yes, it is probably better to move it to
>> the beginning of the function into a new variable and use that
>> here.
>>
>
> Why is it relative to the tx queue length?
>
> What's the meaning of drain_timeout multiplied by the last part
> (DIV_ROUND_UP)?
>
> If you proposed to use vif->dev->tx_queue_len to replace the
> DIV_ROUND_UP part, then ignore the above question. But I still don't
> understand the rationale behind this. Could you elaborate a bit more?
> Wouldn't rx_drain_timeout_msecs/1000 alone suffice?

Here we want to avoid timeout messages if an skb can be legitimately 
stuck somewhere else. As we discussed earlier, realistically this could 
be another vif's internal or QDisc queue. That other vif also has this 
rx_drain_timeout_msecs timeout, but now, with Paul's recent changes, 
the timer only ditches the internal queue. After each such round the 
QDisc can, in the worst case, put XEN_NETIF_RX_RING_SIZE / 
MAX_SKB_FRAGS skbs into that vif's internal queue, so we need several 
rounds of these timeouts before we can be sure that no other vif still 
holds skbs from us. We are not sending any more skbs in the meantime, 
so newly stuck packets are not interesting for us here.
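
To put numbers on it (the values below are from memory, so please 
double check: rx_drain_timeout_msecs defaults to 10000, 
XEN_NETIF_RX_RING_SIZE is 256, MAX_SKB_FRAGS is 17 and 
XENVIF_QUEUE_LENGTH is 32):

	(10000 / 1000) * DIV_ROUND_UP(32, 256 / 17)
	= 10 * DIV_ROUND_UP(32, 15)
	= 10 * 3
	= 30

so the loop in xenvif_free would wait 30 of its one-second rounds 
before it starts complaining.
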
But the current vif's queue length is actually not relevant in this 
calculation, as it doesn't mean the other vifs have the same. I think 
it is better to stick with XENVIF_QUEUE_LENGTH.
I've added this explanation as a comment and moved the calculation into 
a separate variable, so it doesn't cause such long lines.
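
Roughly like this (just a sketch, the final variable name and comment 
wording might differ in the next version):

	/* An skb of ours stuck in another vif can sit in that vif's
	 * internal queue for a drain round (rx_drain_timeout_msecs), and
	 * the QDisc above it can refill the internal queue with up to
	 * XEN_NETIF_RX_RING_SIZE / MAX_SKB_FRAGS skbs per round, so only
	 * after enough rounds to drain a whole XENVIF_QUEUE_LENGTH long
	 * QDisc can we be sure no other vif holds our skbs.
	 */
	int worst_case_skb_lifetime = (rx_drain_timeout_msecs / 1000) *
		DIV_ROUND_UP(XENVIF_QUEUE_LENGTH,
			     XEN_NETIF_RX_RING_SIZE / MAX_SKB_FRAGS);
	...
		if (unmap_timeout > worst_case_skb_lifetime &&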

Zoli
