Message-Id: <4F2BEEA70200007800070E81@nat28.tlf.novell.com>
Date: Fri, 03 Feb 2012 13:26:47 +0000
From: "Jan Beulich" <JBeulich@...e.com>
To: "Wei Liu" <wei.liu2@...rix.com>, "Laszlo Ersek" <lersek@...hat.com>
Cc: "Ian Campbell" <Ian.Campbell@...rix.com>,
"jeremy@...p.org" <jeremy@...p.org>,
"xen-devel@...ts.xensource.com" <xen-devel@...ts.xensource.com>,
"Konrad Rzeszutek Wilk" <konrad.wilk@...cle.com>,
"netdev@...r.kernel.org" <netdev@...r.kernel.org>
Subject: Re: [Xen-devel] [PATCH] xen-netfront:
correct MAX_TX_TARGET calculation.
>>> On 03.02.12 at 13:59, Laszlo Ersek <lersek@...hat.com> wrote:
> On 02/03/12 13:27, Laszlo Ersek wrote:
>> On 01/27/12 11:36, Wei Liu wrote:
>
>>> As the tx structure is bigger than the rx structure, I think the
>>> scratch space size is likely to shrink after the correction.
>>
>> It also seems to affect the netfront_tx_slot_available() function,
>> making it stricter (likely). Before the patch, the function may have
>> reported available slots when there were none, causing spurious(?) queue
>> wakeups in xennet_maybe_wake_tx(), and not stopping the queue in
>> xennet_start_xmit() when it should have(?).
>
> (Eyeballing the source makes me think
>
> NET_TX_RING_SIZE == (4096 - 16 - 48) / (5 * 4) == 201
> NET_RX_RING_SIZE == (4096 - 16 - 48) / (4 * 4) == 252

NET_TX_RING_SIZE == (4096 - 16 - 48) / (6 * 2) == 336
NET_RX_RING_SIZE == (4096 - 16 - 48) / (4 * 2) == 504

and with {R,T}X_MAX_TARGET capped to 256 the change really is
benign without multi-page ring support afaict.
Jan
> but I didn't try to verify them.)
>
> Laszlo
>