Message-Id: <20180227.111306.790069240486489244.davem@davemloft.net>
Date: Tue, 27 Feb 2018 11:13:06 -0500 (EST)
From: David Miller <davem@...emloft.net>
To: antoine.tenart@...tlin.com
Cc: ymarkman@...vell.com, mw@...ihalf.com, stefanc@...vell.com,
thomas.petazzoni@...e-electrons.com,
gregory.clement@...e-electrons.com,
miquel.raynal@...e-electrons.com, nadavh@...vell.com,
maxime.chevallier@...tlin.com, netdev@...r.kernel.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH net-next 2/3] net: mvpp2: adjust gso stop wake
thresholds
From: Antoine Tenart <antoine.tenart@...tlin.com>
Date: Mon, 26 Feb 2018 15:14:26 +0100
> From: Yan Markman <ymarkman@...vell.com>
>
> Adjust MVPP2_MAX_TSO_SEGS and stop_threshold/wake_threshold
> for better TXQ utilization and performance.
>
> Signed-off-by: Yan Markman <ymarkman@...vell.com>
> ---
> drivers/net/ethernet/marvell/mvpp2.c | 4 ++--
> 1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/net/ethernet/marvell/mvpp2.c b/drivers/net/ethernet/marvell/mvpp2.c
> index 55300b1fe6c0..1a893ef70eab 100644
> --- a/drivers/net/ethernet/marvell/mvpp2.c
> +++ b/drivers/net/ethernet/marvell/mvpp2.c
> @@ -498,7 +498,7 @@
> * skb. As we need a maximum of two descriptors per fragment (1 header, 1 data),
> * multiply this value by two to count the maximum number of skb descs needed.
> */
> -#define MVPP2_MAX_TSO_SEGS 300
> +#define MVPP2_MAX_TSO_SEGS 100
> #define MVPP2_MAX_SKB_DESCS (MVPP2_MAX_TSO_SEGS * 2 + MAX_SKB_FRAGS)
>
> /* Default number of RXQs in use */
> @@ -5810,7 +5810,7 @@ static int mvpp2_txq_init(struct mvpp2_port *port,
> txq_pcpu->tso_headers = NULL;
>
> txq_pcpu->stop_threshold = txq->size - MVPP2_MAX_SKB_DESCS;
> - txq_pcpu->wake_threshold = txq_pcpu->stop_threshold / 2;
> + txq_pcpu->wake_threshold = txq_pcpu->stop_threshold - 100;
>
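Plugging in rough numbers for illustration (the 1024-entry TX ring and
MAX_SKB_FRAGS == 17 below are assumptions for the example, not values
taken from this patch):

	MVPP2_MAX_SKB_DESCS  = 100 * 2 + 17 = 217
	stop_threshold       = 1024 - 217   = 807
	wake_threshold (old) = 807 / 2      = 403
	wake_threshold (new) = 807 - 100    = 707

In other words, the wake point now sits 100 descriptors below the stop
point instead of at half of it.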
This number 100 is a magic constant. If it is related to
MVPP2_MAX_TSO_SEGS, please use that define. Otherwise,
define a new one with a descriptive name.
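For instance, if the 100 is meant to track MVPP2_MAX_TSO_SEGS, a
minimal sketch of that suggestion could look like the following
(MVPP2_TX_WAKE_GAP is a hypothetical name, not an existing driver
define):

	/* Number of descriptors that must be freed past the stop point
	 * before the queue is woken again. Hypothetical sketch only.
	 */
	#define MVPP2_TX_WAKE_GAP	MVPP2_MAX_TSO_SEGS

		txq_pcpu->wake_threshold = txq_pcpu->stop_threshold - MVPP2_TX_WAKE_GAP;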
Thank you.