Message-ID: <20130904131459.GO14104@zion.uk.xensource.com>
Date: Wed, 4 Sep 2013 14:14:59 +0100
From: Wei Liu <wei.liu2@...rix.com>
To: David Vrabel <david.vrabel@...rix.com>
CC: Wei Liu <wei.liu2@...rix.com>, <xen-devel@...ts.xen.org>,
Konrad Rzeszutek Wilk <konrad.wilk@...cle.com>,
Boris Ostrovsky <boris.ostrovsky@...cle.com>,
Ian Campbell <ian.campbell@...rix.com>,
<netdev@...r.kernel.org>, <msw@...zon.com>, <annie.li@...cle.com>
Subject: Re: [PATCH] xen-netback: count number required slots for an skb more
carefully
On Wed, Sep 04, 2013 at 12:48:15PM +0100, David Vrabel wrote:
> On 03/09/13 22:53, Wei Liu wrote:
> > On Tue, Sep 03, 2013 at 06:29:50PM +0100, David Vrabel wrote:
> >> From: David Vrabel <david.vrabel@...rix.com>
> >>
> >> When a VM is providing an iSCSI target and the LUN is used by the
> >> backend domain, the generated skbs for direct I/O writes to the disk
> >> have large, multi-page skb->data but no frags.
> >>
> >> With some lengths and starting offsets, xen_netbk_count_skb_slots()
> >> would count one slot too few, because the simple calculation of
> >> DIV_ROUND_UP(skb_headlen(), PAGE_SIZE) did not account for the
> >> decisions made by start_new_rx_buffer(), which does not guarantee
> >> that responses are fully packed.
> >>
> >> For example, an skb with length < 2 pages but which spans 3 pages
> >> would be counted as requiring 2 slots but would actually use 3 slots.
> >>
> >> skb->data:
> >>
> >> | 1111|222222222222|3333 |
> >>
> >> Fully packed, this would need 2 slots:
> >>
> >> |111122222222|22223333 |
> >>
> >> But because the 2nd page wholly fits into a slot, it is not split
> >> across slots and goes into a slot of its own:
> >>
> >> |1111 |222222222222|3333 |
> >>
> >> Miscounting the number of slots means netback may push more responses
> >> than the number of available requests. This will cause the frontend
> >> to get very confused and report "Too many frags/slots". The frontend
> >> never recovers and will eventually BUG.
> >>
> >> Fix this by counting the number of required slots more carefully. In
> >> xen_netbk_count_skb_slots(), more closely follow the algorithm used by
> >> xen_netbk_gop_skb() by introducing xen_netbk_count_frag_slots() which
> >> is the dry-run equivalent of netbk_gop_frag_copy().
> >>
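
To make the miscount concrete, here is an illustrative stand-alone
sketch (plain userspace C, not netback code) comparing the naive
DIV_ROUND_UP count with an approximation of the packing decision
start_new_rx_buffer() makes; SLOT_SIZE stands in for the per-slot
capacity (MAX_BUFFER_OFFSET == PAGE_SIZE in netback):

    #include <stdio.h>

    #define SLOT_SIZE 4096UL
    #define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

    /* The naive count the old xen_netbk_count_skb_slots() used. */
    static int naive_slots(unsigned long len)
    {
        return DIV_ROUND_UP(len, SLOT_SIZE);
    }

    /*
     * Approximation of the packing start_new_rx_buffer() produces:
     * walk skb->data one source-page chunk at a time and open a new
     * slot whenever the chunk doesn't fit in what's left of the
     * current slot, so a page that wholly fits is never split.
     */
    static int packed_slots(unsigned long offset, unsigned long len)
    {
        int slots = 0;
        unsigned long space = 0;  /* bytes left in the current slot */

        while (len) {
            /* length of this chunk, up to the next page boundary */
            unsigned long chunk = SLOT_SIZE - (offset % SLOT_SIZE);

            if (chunk > len)
                chunk = len;
            if (chunk > space) {  /* doesn't fit: open a new slot */
                slots++;
                space = SLOT_SIZE;
            }
            space -= chunk;
            offset += chunk;
            len -= chunk;
        }
        return slots;
    }

    int main(void)
    {
        /* The example above: a 1K tail of page 1, all of page 2 and
         * a 1K head of page 3 -- len < 2 pages but spanning 3. */
        unsigned long offset = 3072, len = 1024 + 4096 + 1024;

        printf("naive=%d actual=%d\n",
               naive_slots(len), packed_slots(offset, len));
        /* prints: naive=2 actual=3 */
        return 0;
    }
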
> >
> > Phew! So this is a backend miscounting bug. I thought it was a
> > frontend bug, so it didn't ring a bell when we had our face-to-face
> > discussion, sorry. :-(
> >
> > This bug was discussed back in July among Annie, Matt, Ian and me.
> > We finally agreed to take Matt's solution, and Matt agreed to post
> > the final version within a week, but he has obviously been too busy
> > to do so. I was away so I didn't follow closely. Eventually it fell
> > through the cracks. :-(
>
> I think I prefer fixing the counting for backporting to stable kernels.

The original patch includes coding style changes; setting aside that
contextual churn, it's not a very long patch.

> Xi's approach of packing the ring differently is a change in
> frontend-visible behaviour and seems more risky, e.g. a possible
> performance impact, so I would like to see some performance analysis
> of that approach.
>

With Xi's approach it is more efficient for the backend to process, as
we now use one less grant copy operation, which means we copy the same
amount of data with fewer grant ops.

From the frontend's PoV I think the impact is minimal. The frontend is
only involved in assembling the packets: it takes what's in the ring
and chains the buffers together. The only operation that involves
copying is __pskb_pull_tail, which happens a) in the rare case when
there are more frags than the frontend's MAX_SKB_FRAGS, and b) when
pull_to > skb_headlen. With Xi's change the rare case a) becomes even
rarer, because we use fewer slots. Case b) happens exactly as it did
before Xi's change, because the pull is guarded by "if (pull_to >
skb_headlen(skb))" and Xi's change doesn't affect skb_headlen.
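
For reference, the guard in question, paraphrased from netfront's RX
path (a sketch, not a verbatim quote; RX_COPY_THRESHOLD caps how much
data netfront wants in the linear area):

    /*
     * The pull only happens when the desired linear area exceeds the
     * current headlen, and neither value depends on how the backend
     * packed the ring slots.
     */
    unsigned int pull_to = min_t(unsigned int, skb->len, RX_COPY_THRESHOLD);

    if (pull_to > skb_headlen(skb))
        __pskb_pull_tail(skb, pull_to - skb_headlen(skb));
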
So overall I don't see an obvious downside.

Wei.
> David