Message-ID: <9AAE0902D5BC7E449B7C8E4E778ABCD029C5E2@AMSPEX01CL01.citrite.net>
Date: Thu, 27 Mar 2014 12:29:27 +0000
From: Paul Durrant <Paul.Durrant@...rix.com>
To: Ian Campbell <Ian.Campbell@...rix.com>
CC: "xen-devel@...ts.xen.org" <xen-devel@...ts.xen.org>,
"netdev@...r.kernel.org" <netdev@...r.kernel.org>,
Wei Liu <wei.liu2@...rix.com>,
Sander Eikelenboom <linux@...elenboom.it>
Subject: RE: [PATCH net 2/3] xen-netback: worst-case estimate in xenvif_rx_action is underestimating
> -----Original Message-----
> From: Ian Campbell
> Sent: 27 March 2014 12:28
> To: Paul Durrant
> Cc: xen-devel@...ts.xen.org; netdev@...r.kernel.org; Wei Liu; Sander
> Eikelenboom
> Subject: Re: [PATCH net 2/3] xen-netback: worst-case estimate in xenvif_rx_action is underestimating
>
> On Thu, 2014-03-27 at 12:23 +0000, Paul Durrant wrote:
> > The worst-case estimate of skb ring slot usage in xenvif_rx_action()
> > fails to take the fragment page_offset into account. The page_offset
> > does, however, affect the number of times the fragmentation code calls
> > start_new_rx_buffer() (i.e. consumes another slot), and the worst-case
> > estimate should assume that call always returns true. This patch adds
> > the page_offset into the DIV_ROUND_UP() for each frag.
>
> At least for copying mode, wasn't the idea that you would copy to the
> start of the page, so the offset wasn't relevant? IOW, is the real issue
> that start_new_rx_buffer() is/was too aggressive?
>
> Now that we do mapping, though, I suspect the offset becomes relevant
> again here and there is a 1:1 mapping from slots to frags again.
>
We're always in copying mode. This is the guest receive side :-)
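
To illustrate with made-up numbers (a quick userspace sketch assuming
PAGE_SIZE is 4096, not the kernel code): a frag of 512 bytes starting at
page_offset 3800 crosses a page boundary, so the copy loop may start a
new buffer for it even though DIV_ROUND_UP(size, PAGE_SIZE) says one
slot suffices:

#include <stdio.h>

#define PAGE_SIZE 4096
#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

int main(void)
{
	/* Hypothetical frag: 512 bytes starting near the end of a page. */
	unsigned int size = 512, offset = 3800;

	/* Old estimate ignores the offset: 1 slot. */
	printf("old: %u\n", DIV_ROUND_UP(size, PAGE_SIZE));

	/* Worst-case estimate: the data spans 2 pages, so 2 slots. */
	printf("new: %u\n", DIV_ROUND_UP(offset + size, PAGE_SIZE));

	return 0;
}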
> (I could have sworn David V got rid of all this precalculating stuff.)
>
He did modify it. I got rid of that in favour of the best-case and worst-case estimations.
Paul
> >
> > Signed-off-by: Paul Durrant <paul.durrant@...rix.com>
> > Cc: Ian Campbell <ian.campbell@...rix.com>
> > Cc: Wei Liu <wei.liu2@...rix.com>
> > Cc: Sander Eikelenboom <linux@...elenboom.it>
> > ---
> > drivers/net/xen-netback/netback.c | 12 +++++++++++-
> > 1 file changed, 11 insertions(+), 1 deletion(-)
> >
> > diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
> > index befc413..ac35489 100644
> > --- a/drivers/net/xen-netback/netback.c
> > +++ b/drivers/net/xen-netback/netback.c
> > @@ -492,8 +492,18 @@ static void xenvif_rx_action(struct xenvif *vif)
> >  						PAGE_SIZE);
> >  	for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) {
> >  		unsigned int size;
> > +		unsigned int offset;
> > +
> >  		size = skb_frag_size(&skb_shinfo(skb)->frags[i]);
> > -		max_slots_needed += DIV_ROUND_UP(size, PAGE_SIZE);
> > +		offset = skb_shinfo(skb)->frags[i].page_offset;
> > +
> > +		/* For a worst-case estimate we need to factor in
> > +		 * the fragment page offset as this will affect the
> > +		 * number of times xenvif_gop_frag_copy() will
> > +		 * call start_new_rx_buffer().
> > +		 */
> > +		max_slots_needed += DIV_ROUND_UP(offset + size,
> > +						 PAGE_SIZE);
> >  	}
> >  	if (skb_is_gso(skb) &&
> >  	    (skb_shinfo(skb)->gso_type & SKB_GSO_TCPV4 ||
>
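
For anyone following along, a minimal userspace sketch of the patched
per-frag accumulation (the frag sizes and offsets are made up, and
struct frag is a hypothetical stand-in for the skb frag fields used
above):

#include <stdio.h>

#define PAGE_SIZE 4096
#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

/* Hypothetical stand-in for the two skb frag fields we read. */
struct frag {
	unsigned int size;
	unsigned int page_offset;
};

int main(void)
{
	/* Made-up frags: the estimate assumes start_new_rx_buffer()
	 * returns true at every page boundary the data crosses.
	 */
	struct frag frags[] = { { 512, 3800 }, { 8192, 100 } };
	unsigned int i, max_slots_needed = 0;

	for (i = 0; i < sizeof(frags) / sizeof(frags[0]); i++)
		max_slots_needed += DIV_ROUND_UP(frags[i].page_offset +
						 frags[i].size, PAGE_SIZE);

	/* 2 slots for the first frag + 3 for the second = 5. */
	printf("max_slots_needed = %u\n", max_slots_needed);

	return 0;
}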