Date:	Wed, 24 Sep 2014 10:43:31 -0400
From:	David L Stevens <david.stevens@...cle.com>
To:	David Miller <davem@...emloft.net>
CC:	Raghuram.Kothakota@...cle.com, netdev@...r.kernel.org
Subject: Re: [PATCHv6 net-next 1/3] sunvnet: upgrade to VIO protocol version 1.6



On 09/23/2014 02:44 PM, David Miller wrote:
> From: David L Stevens <david.stevens@...cle.com>
> Date: Tue, 23 Sep 2014 12:49:27 -0400
> 
>> Actually, that's exactly what I've been working on for the last few
>> days. I hope to post this soon. Currently, I allow for misaligned
>> packets by reallocating the skbs with the proper alignment, skip and
>> length restrictions, so the code can handle either, but still copies
>> most of the time. Once I have all the kinks worked out there, I was
>> planning to possibly make *all* skb allocations on LDOMs and/or SPARC64 fit
>> those requirements, since they are compatible with the existing alignments
>> and would allow using the HV copy in any case.
> 
> You should be able to avoid the copy on TX almost all of the time.
> 
> If you do a skb_push(skb, VNET_PACKET_SKIP) (and initialize with some
> garbage bytes) it ought to be aligned.
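
That suggestion would look roughly like this (a sketch only, untested;
vnet_prepend_skip() is just a name for illustration):

	/* Claim the VNET_PACKET_SKIP bytes out of the existing headroom
	 * so the mapped region starts where the VIO protocol expects.
	 */
	static unsigned char *vnet_prepend_skip(struct sk_buff *skb)
	{
		unsigned char *start = skb_push(skb, VNET_PACKET_SKIP);

		memset(start, 0, VNET_PACKET_SKIP);	/* the pad/"garbage" bytes */
		return start;
	}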

I can't touch the data buffer (head or tail) without triggering a COW copy,
which is often also misaligned. But the code I have now maps the existing
head and tail as long as they are part of the skb (i.e., there is enough
headroom and tailroom to cover them), and with that I can avoid copies
almost all the time for TCP.
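
The check amounts to something like this (a rough sketch for linear skbs
only; LDC_BUF_ALIGN is my placeholder for whatever alignment the HV copy
interface actually requires):

	#define LDC_BUF_ALIGN	8	/* assumed HV copy alignment */

	static bool vnet_skb_mappable(const struct sk_buff *skb)
	{
		unsigned long start = (unsigned long)skb->data - VNET_PACKET_SKIP;
		unsigned long len = VNET_PACKET_SKIP + skb->len;

		/* Need headroom for the skip bytes, tailroom for the
		 * length padding, and the whole range HV-aligned.
		 */
		if (skb_headroom(skb) < VNET_PACKET_SKIP)
			return false;
		if (skb_tailroom(skb) < ALIGN(len, LDC_BUF_ALIGN) - len)
			return false;
		return IS_ALIGNED(start, LDC_BUF_ALIGN);
	}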

ICMP and ARP still copy in most cases, but those aren't generally
high-volume. I haven't tried UDP yet.

Initial testing shows a ~25% reduction in throughput for the default MTU
(from ~1Gbps to ~750Mbps), but with a 64K MTU I get a ~25% increase in
throughput: from ~7.5Gbps with the original patches to 9.6Gbps with the
no-copy code, which instead maps, allocates, frees, and unmaps buffers on
demand.

Of course, the same change that costs throughput on the low end also
eliminates the static TX buffers, so it allows scaling up the number of
LDOMs per vswitch without any memory penalty, instead of the n^2 growth
we had before.
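
To make the scaling concrete (symbolic, since the actual ring sizes vary):
with a static TX ring of B bytes per peer, n guests on one vswitch hold
n*(n-1) rings, i.e. roughly n^2 * B bytes pinned even when idle. With
on-demand mapping, pinned memory tracks only the packets in flight,
independent of n.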

If the current static buffer allocation is "good enough," despite its poor
scaling, then we might consider a hybrid where we essentially use the old
code for smaller packets, and direct mapping for larger ones. I have some
other ideas to experiment with, too.
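
The hybrid split itself is trivial (sketch; VNET_COPY_THRESH is a made-up
tunable, and vnet_tx_copy()/vnet_tx_map() stand in for the existing
static-buffer path and the new mapping path):

	#define VNET_COPY_THRESH	256	/* bytes; needs benchmarking */

	static int vnet_start_xmit_hybrid(struct sk_buff *skb,
					  struct net_device *dev)
	{
		if (skb->len <= VNET_COPY_THRESH)
			return vnet_tx_copy(skb, dev);	/* old copy path */
		return vnet_tx_map(skb, dev);		/* new mapping path */
	}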

							+-DLS