Open Source and information security mailing list archives
 
Message-ID: <20130905102748.GX14104@zion.uk.xensource.com>
Date:	Thu, 5 Sep 2013 11:27:48 +0100
From:	Wei Liu <wei.liu2@...rix.com>
To:	David Vrabel <david.vrabel@...rix.com>
CC:	Wei Liu <wei.liu2@...rix.com>, <xen-devel@...ts.xen.org>,
	Konrad Rzeszutek Wilk <konrad.wilk@...cle.com>,
	Boris Ostrovsky <boris.ostrovsky@...cle.com>,
	Ian Campbell <ian.campbell@...rix.com>,
	<netdev@...r.kernel.org>, <msw@...zon.com>, <annie.li@...cle.com>
Subject: Re: [PATCH] xen-netback: count number required slots for an skb more
 carefully

On Thu, Sep 05, 2013 at 11:12:11AM +0100, David Vrabel wrote:
> On 04/09/13 16:44, Wei Liu wrote:
> > On Wed, Sep 04, 2013 at 03:02:01PM +0100, David Vrabel wrote:
> > [...]
> >>>>
> >>>> I think I prefer fixing the counting for backporting to stable kernels.
> >>>
> >>> The original patch has a coding style change. Sans that contextual
> >>> change, it's not a very long patch.
> >>
> >> The size of the patch isn't the main concern for backport-ability.  It's
> >> the frontend-visible changes and thus any (unexpected) impacts on
> >> frontends -- this is especially important as only a small fraction of
> >> frontends in use will be tested with these changes.
> >>
> >>>> Xi's approach of packing the ring differently is a change in
> >>>> frontend-visible behaviour and seems more risky, e.g., a possible
> >>>> performance impact, so I would like to see some performance analysis
> >>>> of that approach.
> >>>>
> >>>
> >>> With Xi's approach it is more efficient for the backend to process, as
> >>> we now use one less grant copy operation, which means we copy the same
> >>> amount of data with fewer grant ops.
> >>
> >> I think it uses more grant ops, because the copies of the linear
> >> portion are in chunks that do not cross source page boundaries.
> >>
> >> i.e., in netbk_gop_skb():
> >>
> >> 	data = skb->data;
> >> 	while (data < skb_tail_pointer(skb)) {
> >> 		unsigned int offset = offset_in_page(data);
> >> 		unsigned int len = PAGE_SIZE - offset;
> >>                 [...]
> >>
> >> It wasn't clear from the patch that this had been considered and that
> >> any extra space needed in the grant op array was made available.
> >>
> > 
> > If I'm not mistaken, the grant op array is already enormous. See the
> > comment in struct xen_netbk for grant_copy_op. The case where a buffer
> > straddles two slots was taken into consideration long ago -- that's
> > why you don't see any comment or code change WRT that.
> 
> I'm not convinced that even that is enough for the current
> implementation in the (very) worst case.
> 
> Consider an skb with 8 frags, all 512 bytes in length.  The linear data
> will be placed into 1 slot, and the frags will be packed into 1 slot, so
> 9 grant ops and 2 slots.
> 

That is still less than the 8 * 2 = 16 entries in the grant copy array,
right? Basically, each paged buffer is allotted 2 entries in the grant
copy array.

> I definitely think we do not want to potentially regress any further in
> this area.
> 

Sure.

> David
--
To unsubscribe from this list: send the line "unsubscribe netdev" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
