Date:	Tue, 19 Apr 2016 21:30:42 +0300
From:	Saeed Mahameed <saeedm@....mellanox.co.il>
To:	Jesper Dangaard Brouer <brouer@...hat.com>
Cc:	Eric Dumazet <eric.dumazet@...il.com>,
	Saeed Mahameed <saeedm@...lanox.com>,
	"David S. Miller" <davem@...emloft.net>,
	Linux Netdev List <netdev@...r.kernel.org>,
	Or Gerlitz <ogerlitz@...lanox.com>,
	Tal Alon <talal@...lanox.com>,
	Tariq Toukan <tariqt@...lanox.com>,
	Eran Ben Elisha <eranbe@...lanox.com>,
	Achiad Shochat <achiad@...lanox.com>,
	Mel Gorman <mgorman@...hsingularity.net>,
	linux-mm <linux-mm@...ck.org>
Subject: Re: [PATCH net-next V2 05/11] net/mlx5e: Support RX multi-packet WQE
 (Striding RQ)

On Tue, Apr 19, 2016 at 7:25 PM, Jesper Dangaard Brouer
<brouer@...hat.com> wrote:
> On Mon, 18 Apr 2016 07:17:13 -0700
> Eric Dumazet <eric.dumazet@...il.com> wrote:
>
>> Another idea would be to have a way to control the max number of
>> order-5 pages that a port would be using.
>>
>> Since the driver always owns a ref on an order-5 page, the idea would
>> be to maintain a circular ring of up to XXX such pages, so that we can
>> detect abnormal use and fall back to order-0 immediately.
>
> That is part of my idea with my page-pool proposal.  In the page-pool
> I want to have a watermark counter that can block/stop the OOM issue
> at the RX ring level.
>
> See slide 12 of presentation:
> http://people.netfilter.org/hawk/presentations/MM-summit2016/generic_page_pool_mm_summit2016.pdf
>
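
Just to make sure I read the proposal right, here is a rough sketch
with made-up names (not real code): the port owns a small circular
ring of order-5 pages, and a page the stack has not released yet is
the abnormal-use signal that triggers the order-0 fallback.

#define MAX_ORDER5_PAGES 64		/* the "XXX" cap; value made up here */

struct rx_page_ring {
	struct page	*pages[MAX_ORDER5_PAGES];
	unsigned int	next;
};

static struct page *rx_get_page(struct rx_page_ring *ring)
{
	struct page *page = ring->pages[ring->next];

	/* The driver holds one ref; more means the stack still uses it. */
	if (page && page_count(page) == 1) {
		ring->next = (ring->next + 1) % MAX_ORDER5_PAGES;
		return page;
	}
	return alloc_page(GFP_ATOMIC);	/* immediate order-0 fallback */
}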

Cool idea, guys, and we already tested our own version of it.  We
tried to recycle our own driver pages, but the stack took too long to
release them: to reuse a recycled page for every RX packet at 50Gb
line rate, we had to keep a page pool of 2X, and sometimes 4X, the
ring size.  We dropped the idea since even 2X is too much.
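
Concretely, the reuse check we experimented with amounts to something
like this (illustrative sketch, not the actual driver code):

static bool rx_page_is_reusable(struct page *page)
{
	/* Our own ref is the only one left -> the stack released it. */
	return page_count(page) == 1;
}

At 50Gb line rate most pages in a pool of exactly the ring size still
fail that check by the time the ring wraps around, which is what
forced the 2X-4X sizing.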

But definitely, this is the best way to go for all drivers: reusing
already-DMA-mapped pages and significantly reducing DMA operations in
the driver is a big win!  We are still considering this option as a
future optimization.
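
To illustrate where the saving comes from (hypothetical helper names,
not driver code): a recycled page keeps its DMA mapping, so the
per-packet cost drops from a full unmap + map cycle to a cache sync.

/* The page is DMA-mapped once, when it first enters the pool ... */
static dma_addr_t rx_page_map_once(struct device *dev, struct page *page,
				   unsigned int order)
{
	return dma_map_page(dev, page, 0, PAGE_SIZE << order,
			    DMA_FROM_DEVICE);
}

/* ... and every reuse just hands the same mapping back to the device. */
static void rx_page_reuse(struct device *dev, dma_addr_t dma,
			  unsigned int order)
{
	dma_sync_single_for_device(dev, dma, PAGE_SIZE << order,
				   DMA_FROM_DEVICE);
}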
