Message-ID: <20160217225958.GA31113@1wt.eu>
Date: Wed, 17 Feb 2016 23:59:58 +0100
From: Willy Tarreau <w@....eu>
To: Gregory CLEMENT <gregory.clement@...e-electrons.com>
Cc: "David S. Miller" <davem@...emloft.net>,
linux-kernel@...r.kernel.org, netdev@...r.kernel.org,
Thomas Petazzoni <thomas.petazzoni@...e-electrons.com>,
Florian Fainelli <f.fainelli@...il.com>,
Jason Cooper <jason@...edaemon.net>,
Andrew Lunn <andrew@...n.ch>,
Sebastian Hesselbarth <sebastian.hesselbarth@...il.com>,
linux-arm-kernel@...ts.infradead.org,
Lior Amsalem <alior@...vell.com>,
Nadav Haklai <nadavh@...vell.com>,
Marcin Wojtas <mw@...ihalf.com>,
Simon Guinot <simon.guinot@...uanux.org>,
Russell King - ARM Linux <linux@....linux.org.uk>,
Timor Kardashov <timork@...vell.com>,
Sebastian Careba <nitroshift@...oo.com>
Subject: Re: [PATCH v2 net-next 0/8] API set for HW Buffer management
Hi Gregory,
On Tue, Feb 16, 2016 at 04:33:35PM +0100, Gregory CLEMENT wrote:
> Hello,
>
> A few weeks ago I sent a proposal for an API set for HW Buffer
> management; for a better view of the motivation for this API, see
> the cover letter of that proposal:
> http://thread.gmane.org/gmane.linux.kernel/2125152
>
> Since that version, I have taken into account Florian's review:
> - The hardware buffer management helpers are no longer built by default
> and now depend on a hidden config symbol which has to be selected
> by the driver if needed
> - The hwbm_pool_refill() and hwbm_pool_add() now receive a gfp_t
> argument, allowing the caller to specify the flags it needs.
> - buf_num is now tested to ensure there is no wrapping
> - A spinlock has been added to protect the hwbm_pool_add() function in
> SMP or irq context.
>
> I also used pr_warn instead of pr_debug in case of errors.
>
> I fixed the mvneta implementation by returning the buffer to the pool
> at various places instead of ignoring it.
>
> About the series itself, I tried to make it easier to merge:
> - Squashed "bus: mvebu-mbus: Fix size test for
> mvebu_mbus_get_dram_win_info" into "bus: mvebu-mbus: provide api for
> obtaining IO and DRAM window information".
> - Added my Signed-off-by on all the patches as submitter of the series.
> - Renamed the dts patches with the pattern "ARM: dts: platform:"
> - Removed the patch "ARM: mvebu: enable SRAM support in
> mvebu_v7_defconfig" from this series, as it has already been applied.
> - Modified the order of the patches.
>
> In order to ease testing, the branch mvneta-BM-framework-v2 is
> available at git@...hub.com:MISL-EBU-System-SW/mainline-public.git.
Well, I tested this patch series on top of latest master (from today)
on my fresh new clearfog board. I compared carefully with and without
the patchset. My workload was haproxy receiving connections and forwarding
them to my PC via the same port. I tested both with short connections
(HTTP GET of an empty file) and long ones (1 MB or more). No trouble
was detected at all, which is pretty good. I noticed a very tiny
performance drop, which is more noticeable on short connections (high
packet rates): my forwarded connection rate went down from 17500/s to
17300/s. But I have not yet checked what can be tuned when using the
BM, nor did I compare CPU usage. I remember having run some tests in
the past, I guess it was on the XP-GP board, and noticed that the BM
could save a significant amount of CPU and improve cache efficiency,
so if this is the case here, we don't really care about a possible 1%
performance drop.
I'll try to provide more results as time permits.
In the meantime, if you want (or plan to submit a next batch), feel
free to add a Tested-by: Willy Tarreau <w@....eu>.
cheers,
Willy