Date:	Mon, 2 Feb 2015 15:56:49 +0000
From:	David Laight <David.Laight@...LAB.COM>
To:	'Govindarajulu Varadarajan' <_govind@....com>,
	"davem@...emloft.net" <davem@...emloft.net>,
	"netdev@...r.kernel.org" <netdev@...r.kernel.org>
CC:	"ssujith@...co.com" <ssujith@...co.com>,
	"benve@...co.com" <benve@...co.com>,
	"edumazet@...gle.com" <edumazet@...gle.com>,
	"ben@...adent.org.uk" <ben@...adent.org.uk>
Subject: RE: [PATCH net-next 0/4] enic: improve rq buff allocation and
 reduce dma mapping

From: Govindarajulu Varadarajan
> The following series tries to address these two problems in rq buff allocation.
> 
> * Memory wastage because of large 9k allocation using kmalloc:
>   For a 9k mtu buffer, netdev_alloc_skb_ip_align internally calls kmalloc
>   for sizes > 4096. For a 9k buff, kmalloc returns an order-2 allocation,
>   i.e. 16k, of which we use only ~9k; 7k is wasted. Using the frag allocator
>   in patch 1/2, we can allocate three 9k buffs from a 32k page.
>   A typical enic configuration has 8 rqs and a desc ring of size 4096.
>   That's 8 * 4096 * (16*1024) = 512 MB. With this frag allocator it is
>   8 * 4096 * (32*1024/3) = 341 MB, a saving of 171 MB.
> 
> * Frequent dma_map() calls:
>   We call dma_map() for every buff we allocate. When the iommu is on, this
>   is very costly in cpu time. In my testing, most of the cpu cycles are
>   spent spinning on spin_lock_irqsave(&iovad->iova_rbtree_lock, flags) in
>   intel_map_page() .. -> ..__alloc_and_insert_iova_range()
> 
>   With this patch, we call dma_map() once per 32k page, i.e. once for every
>   three 9k descs and once for every twenty 1500-byte descs.
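
As a rough user-space illustration of the two points above (not code from the
patches; the queue count, ring size and buffer sizes are simply the figures
quoted), the following sketch charges each 9k buffer a 1/3 share of a 32k
chunk and counts one map operation per chunk instead of one per buffer:

#include <stdio.h>

/*
 * Back-of-the-envelope check of the figures quoted above: per-buffer
 * kmalloc (rounded up to 16k) vs. a 1/3 share of a 32k chunk per buffer,
 * plus the corresponding number of dma_map() calls.
 */
int main(void)
{
	const long rqs = 8;                  /* typical enic config: 8 rqs  */
	const long ring = 4096;              /* descriptors per rq          */
	const long bufs = rqs * ring;
	const long buf_size = 9 * 1024;      /* 9k mtu receive buffer       */
	const long chunk_size = 32 * 1024;   /* one 32k chunk, mapped once  */
	const long bufs_per_chunk = chunk_size / buf_size;         /* 3     */
	const long chunks = (bufs + bufs_per_chunk - 1) / bufs_per_chunk;

	long kmalloc_bytes = bufs * 16 * 1024;           /* order-2 kmalloc */
	long frag_bytes = bufs * (chunk_size / bufs_per_chunk);

	printf("per-buffer kmalloc: %ld MB\n", kmalloc_bytes >> 20);  /* 512 */
	printf("32k frag chunks:    %ld MB\n", frag_bytes >> 20);     /* 341 */
	printf("saved:              %ld MB\n",
	       (kmalloc_bytes >> 20) - (frag_bytes >> 20));           /* 171 */
	printf("dma_map() calls:    %ld -> %ld\n", bufs, chunks);
	return 0;
}

Built with a plain cc, it prints the 512 MB / 341 MB / 171 MB figures and the
reduction in dma_map() calls from one per buffer to one per 32k chunk.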

Two questions:
1) How are you handling the skb's truesize?
2) Memory fragmentation could easily make the 32k allocations fail.

	David
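
On question 2, the kernel's generic page-frag cache copes with fragmentation
by attempting the large allocation opportunistically (__GFP_NORETRY |
__GFP_NOWARN) and falling back to a single 4k page when it fails. A minimal
user-space sketch of that fallback pattern (illustrative names, not enic code;
the kernel keeps exhausted chunks alive via page refcounts rather than
malloc/free):

#include <stdio.h>
#include <stdlib.h>

/*
 * Fallback pattern: try a large 32k chunk first and, if that fails under
 * memory fragmentation, fall back to a single 4k page. User-space stand-in
 * only; the names are made up for this sketch.
 */
struct frag_cache {
	char   *chunk;   /* current chunk we carve buffers from            */
	size_t  size;    /* 32k if the large allocation succeeded, else 4k */
	size_t  offset;  /* next free byte within the chunk                */
};

static int frag_cache_refill(struct frag_cache *fc)
{
	/* kernel analogue: alloc_pages(order 3, __GFP_NORETRY | __GFP_NOWARN) */
	fc->size = 32 * 1024;
	fc->chunk = malloc(fc->size);
	if (!fc->chunk) {
		/* kernel analogue: order-0 fallback, a single 4k page */
		fc->size = 4 * 1024;
		fc->chunk = malloc(fc->size);
	}
	fc->offset = 0;
	return fc->chunk ? 0 : -1;
}

/* Carve buf_size bytes out of the current chunk, refilling when it is
 * exhausted. The kernel frees old chunks via page refcounts; this sketch
 * leaks them for brevity. */
static void *frag_alloc(struct frag_cache *fc, size_t buf_size)
{
	if (!fc->chunk || fc->offset + buf_size > fc->size) {
		if (frag_cache_refill(fc))
			return NULL;
	}
	if (fc->offset + buf_size > fc->size)
		return NULL;   /* e.g. a 9k buffer cannot fit the 4k fallback */
	void *buf = fc->chunk + fc->offset;
	fc->offset += buf_size;
	return buf;
}

int main(void)
{
	struct frag_cache fc = { 0 };
	for (int i = 0; i < 6; i++)
		printf("buf %d at %p\n", i, frag_alloc(&fc, 9 * 1024));
	return 0;
}

Note the 4k fallback cannot hold a 9k buffer, so a driver relying on 32k
chunks would still need a per-buffer path for jumbo frames, which is
presumably part of what the question is getting at.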
