Message-ID: <20210110180642.GH1551@shell.armlinux.org.uk>
Date: Sun, 10 Jan 2021 18:06:42 +0000
From: Russell King - ARM Linux admin <linux@...linux.org.uk>
To: stefanc@...vell.com
Cc: netdev@...r.kernel.org, thomas.petazzoni@...tlin.com,
davem@...emloft.net, nadavh@...vell.com, ymarkman@...vell.com,
linux-kernel@...r.kernel.org, kuba@...nel.org, mw@...ihalf.com,
andrew@...n.ch, atenart@...nel.org
Subject: Re: [PATCH RFC net-next 11/19] net: mvpp2: add flow control RXQ and
BM pool config callbacks
On Sun, Jan 10, 2021 at 05:30:15PM +0200, stefanc@...vell.com wrote:
> From: Stefan Chulski <stefanc@...vell.com>
>
> This patch does not change any functionality.
> Add flow control RXQ and BM pool config callbacks that will be
> used to configure RXQ and BM pool thresholds.
> These APIs will also enable/disable RXQ and pool Flow Control polling.
>
> At this stage, the BM pool and RXQs have the same stop/start thresholds
> defined in code.
> The thresholds are also common to all RXQs.
>
> Signed-off-by: Stefan Chulski <stefanc@...vell.com>
> ---
> drivers/net/ethernet/marvell/mvpp2/mvpp2.h | 51 +++++-
> drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c | 169 ++++++++++++++++++++
> 2 files changed, 216 insertions(+), 4 deletions(-)
>
> diff --git a/drivers/net/ethernet/marvell/mvpp2/mvpp2.h b/drivers/net/ethernet/marvell/mvpp2/mvpp2.h
> index 4d58af6..0ba0598 100644
> --- a/drivers/net/ethernet/marvell/mvpp2/mvpp2.h
> +++ b/drivers/net/ethernet/marvell/mvpp2/mvpp2.h
> @@ -763,10 +763,53 @@
> ((kb) * 1024 - MVPP2_TX_FIFO_THRESHOLD_MIN)
>
> /* MSS Flow control */
> -#define MSS_SRAM_SIZE 0x800
> -#define FC_QUANTA 0xFFFF
> -#define FC_CLK_DIVIDER 0x140
> -#define MSS_THRESHOLD_STOP 768
> +#define MSS_SRAM_SIZE 0x800
> +#define MSS_FC_COM_REG 0
> +#define FLOW_CONTROL_ENABLE_BIT BIT(0)
> +#define FLOW_CONTROL_UPDATE_COMMAND_BIT BIT(31)
> +#define FC_QUANTA 0xFFFF
> +#define FC_CLK_DIVIDER 0x140
> +
> +#define MSS_BUF_POOL_BASE 0x40
> +#define MSS_BUF_POOL_OFFS 4
> +#define MSS_BUF_POOL_REG(id) (MSS_BUF_POOL_BASE \
> + + (id) * MSS_BUF_POOL_OFFS)
> +
> +#define MSS_BUF_POOL_STOP_MASK 0xFFF
> +#define MSS_BUF_POOL_START_MASK (0xFFF << MSS_BUF_POOL_START_OFFS)
> +#define MSS_BUF_POOL_START_OFFS 12
> +#define MSS_BUF_POOL_PORTS_MASK (0xF << MSS_BUF_POOL_PORTS_OFFS)
> +#define MSS_BUF_POOL_PORTS_OFFS 24
> +#define MSS_BUF_POOL_PORT_OFFS(id) (0x1 << \
> + ((id) + MSS_BUF_POOL_PORTS_OFFS))
> +
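
For what it's worth, a sketch of how I'd expect these fields to get
composed when the pool thresholds are programmed - this assumes the
mvpp2_cm3_write() accessor added earlier in this series, and `pool' is
a hypothetical pool index:

	u32 val = 0;

	/* stop threshold in bits 0..11, start threshold in bits 12..23 */
	val |= MSS_THRESHOLD_STOP & MSS_BUF_POOL_STOP_MASK;
	val |= (MSS_THRESHOLD_START << MSS_BUF_POOL_START_OFFS) &
	       MSS_BUF_POOL_START_MASK;
	/* let this port take part in the pool's flow control */
	val |= MSS_BUF_POOL_PORT_OFFS(port->id);
	mvpp2_cm3_write(port->priv, MSS_BUF_POOL_REG(pool), val);
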
> +#define MSS_RXQ_TRESH_BASE 0x200
> +#define MSS_RXQ_TRESH_OFFS 4
> +#define MSS_RXQ_TRESH_REG(q, fq) (MSS_RXQ_TRESH_BASE + (((q) + (fq)) \
> + * MSS_RXQ_TRESH_OFFS))
> +
> +#define MSS_RXQ_TRESH_START_MASK 0xFFFF
> +#define MSS_RXQ_TRESH_STOP_MASK (0xFFFF << MSS_RXQ_TRESH_STOP_OFFS)
> +#define MSS_RXQ_TRESH_STOP_OFFS 16
> +
> +#define MSS_RXQ_ASS_BASE 0x80
> +#define MSS_RXQ_ASS_OFFS 4
> +#define MSS_RXQ_ASS_PER_REG 4
> +#define MSS_RXQ_ASS_PER_OFFS 8
> +#define MSS_RXQ_ASS_PORTID_OFFS 0
> +#define MSS_RXQ_ASS_PORTID_MASK 0x3
> +#define MSS_RXQ_ASS_HOSTID_OFFS 2
> +#define MSS_RXQ_ASS_HOSTID_MASK 0x3F
> +
> +#define MSS_RXQ_ASS_Q_BASE(q, fq) ((((q) + (fq)) % MSS_RXQ_ASS_PER_REG) \
> + * MSS_RXQ_ASS_PER_OFFS)
> +#define MSS_RXQ_ASS_PQ_BASE(q, fq) ((((q) + (fq)) / MSS_RXQ_ASS_PER_REG) \
> + * MSS_RXQ_ASS_OFFS)
> +#define MSS_RXQ_ASS_REG(q, fq) (MSS_RXQ_ASS_BASE + MSS_RXQ_ASS_PQ_BASE(q, fq))
> +
> +#define MSS_THRESHOLD_STOP 768
> +#define MSS_THRESHOLD_START 1024
> +
>
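
Likewise for the per-queue thresholds - something along these lines,
again assuming the CM3 accessor, with `q' a port RXQ index and `fq'
the port's first queue:

	u32 val = 0;

	/* start threshold in bits 0..15, stop threshold in bits 16..31 */
	val |= MSS_THRESHOLD_START & MSS_RXQ_TRESH_START_MASK;
	val |= (MSS_THRESHOLD_STOP << MSS_RXQ_TRESH_STOP_OFFS) &
	       MSS_RXQ_TRESH_STOP_MASK;
	mvpp2_cm3_write(port->priv, MSS_RXQ_TRESH_REG(q, fq), val);
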
> /* RX buffer constants */
> #define MVPP2_SKB_SHINFO_SIZE \
> diff --git a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
> index bc4b8069..19648c4 100644
> --- a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
> +++ b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
> @@ -744,6 +744,175 @@ static void *mvpp2_buf_alloc(struct mvpp2_port *port,
> return data;
> }
>
> +/* Routine to calculate the shared address space of a single queue */
> +static int mvpp22_calc_shared_addr_space(struct mvpp2_port *port)
> +{
> + /* If the number of CPUs is greater than the number of threads,
> + * return the last address space
> + */
> + if (num_active_cpus() >= MVPP2_MAX_THREADS)
> + return MVPP2_MAX_THREADS - 1;
> +
> + return num_active_cpus();
Firstly - this can be written as:
return min(num_active_cpus(), MVPP2_MAX_THREADS - 1);
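
ie, the whole helper collapses to the following (min_t() rather than
min() on the assumption that the type-strict min() complains, since
num_active_cpus() returns unsigned int):

	static int mvpp22_calc_shared_addr_space(struct mvpp2_port *port)
	{
		/* Clamp to the last address space when the CPUs
		 * outnumber the threads.
		 */
		return min_t(unsigned int, num_active_cpus(),
			     MVPP2_MAX_THREADS - 1);
	}
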
Secondly - what if the number of active CPUs changes, for example due
to hotplug activity? What if we boot with maxcpus=1 and then bring the
other CPUs online after networking has been started? The number of
active CPUs is dynamically managed via the scheduler as CPUs are
brought online or offline.
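
If this value ends up being latched when the port is brought up, one
way to cope would be a CPU hotplug callback that re-evaluates it - a
rough and untested sketch, with the callback name and re-programming
step invented here:

	static int mvpp2_cpu_online(unsigned int cpu)
	{
		/* Recompute the shared address space and rewrite the
		 * RXQ/BM pool flow control assignments for each port.
		 */
		return 0;
	}

	/* at probe time: */
	ret = cpuhp_setup_state(CPUHP_AP_ONLINE_DYN, "net/mvpp2:online",
				mvpp2_cpu_online, NULL);
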
> +/* Routine to enable flow control for RXQs */
> +void mvpp2_rxq_enable_fc(struct mvpp2_port *port)
...
> +/* Routine to disable flow control for RXQs */
> +void mvpp2_rxq_disable_fc(struct mvpp2_port *port)
Nothing seems to call these in this patch, so on its own, it's not
obvious how these are being called, and therefore what remedy to
suggest for num_active_cpus().
--
RMK's Patch system: https://www.armlinux.org.uk/developer/patches/
FTTP is here! 40Mbps down 10Mbps up. Decent connectivity at last!