Date:	Tue, 19 May 2015 10:22:24 +0000
From:	Joao Martins <Joao.Martins@...lab.eu>
To:	David Vrabel <david.vrabel@...rix.com>,
	"xen-devel@...ts.xenproject.org" <xen-devel@...ts.xenproject.org>
CC:	"netdev@...r.kernel.org" <netdev@...r.kernel.org>,
	"wei.liu2@...rix.com" <wei.liu2@...rix.com>,
	"ian.campbell@...rix.com" <ian.campbell@...rix.com>,
	"boris.ostrovsky@...cle.com" <boris.ostrovsky@...cle.com>
Subject: Re: [Xen-devel] [RFC PATCH 13/13] xen-netfront: implement RX
 persistent grants


On 18 May 2015, at 18:04, David Vrabel <david.vrabel@...rix.com> wrote:
> On 12/05/15 18:18, Joao Martins wrote:
>> It allows a newly allocated skb to reuse the gref taken from the
>> pending_ring, which means xennet will grant the pages once and release
>> them only when freeing the device. It changes how netfront handles new
>> skbs to be able to reuse the allocated pages, similarly to what netback
>> already does on its TX path.
>> 
>> alloc_rx_buffers() will consume pages from the pending_ring to
>> allocate new skbs. When responses are handled we will move the grants
>> from the grant_rx to the pending_grants. The latter is a shadow ring
>> that keeps all grants belonging to inflight skbs. We then chain the
>> skbs' ubuf_info together before passing the packet up to the
>> network stack. We make use of SKBTX_DEV_ZEROCOPY to get notified
>> once the skb is freed to be able to reuse pages. On the destructor
>> callback we will then add the grant to the pending_ring.
>> 
>> The only catch with this approach is that when we orphan frags, there
>> will be a memcpy in skb_copy_ubufs() (if the skb has any frags).
>> Depending on the CPU and number of queues this leads to a performance
>> drop of between 7-11%. For this reason, SKBTX_DEV_ZEROCOPY skbs will
>> only be used with persistent grants.
> 
> This means that skbs are passed further up the stack while they are
> still granted to the backend.

__pskb_pull_tail copies into skb->data and unrefs a frag if no data
remains in that frag after the pull. When the packet is then delivered to
the stack (in netif_receive_skb), skb_orphan_frags is called, which
allocates new pages for the frags and memcpys into them (from the granted
pages). The zerocopy callback is then invoked, which releases the grants.
So, in the end, the granted buffers aren't passed up to the protocol
stack, though that could change in the future, as you said. Would you
prefer an explicit memcpy instead of using SKBTX_DEV_ZEROCOPY?

> I think this makes it too difficult to validate that the backend can't
> fiddle with the skb frags inappropriately (both now in the the future
> when other changes in the network stack are made).

But wouldn't this be the case for netback TX as well, since it uses a
similar approach?
