Message-ID: <9e8776e9-978f-a17f-5756-a9a08c88ea51@mellanox.com>
Date: Sun, 31 May 2020 10:03:44 +0300
From: Oz Shlomo <ozsh@...lanox.com>
To: wenxu <wenxu@...oud.cn>, Paul Blakey <paulb@...lanox.com>,
Roi Dayan <roid@...lanox.com>
Cc: Saeed Mahameed <saeedm@...lanox.com>,
"netdev@...r.kernel.org" <netdev@...r.kernel.org>
Subject: Re: The size of ct offload mlx5_flow_table in mlx5e driver
Hi Wenxu,
I'll reply on behalf of Paul
On 5/28/2020 7:02 AM, wenxu wrote:
> Hi Paul,
>
>
> I have a question about the size of the ct and ct_nat flow tables.
>
>
> There are two global mlx5_flow_table tables, ct and ct_nat, for act_ct offload.
>
>
> The ct and ct_nat flow tables are created through mlx5_esw_chains_create_global_table,
>
> and their size is obtained through mlx5_esw_chains_get_avail_sz_from_pool(esw, POOL_NEXT_SIZE).
>
>
> Firmware currently has 4 pools of 4 sizes that it supports (ESW_POOLS),
>
> and a virtual memory region of 16M (ESW_SIZE). It can allocate up to 16M from each pool.
>
>
> ESW_POOLS[] = { 4 * 1024 * 1024,
>                 1 * 1024 * 1024,
>                 64 * 1024,
>                 128 };
>
> So the biggest flow table size is 4M. The ct and ct_nat flow tables are created in advance,
>
> so the size of both ct and ct_nat is 4M.
>
> Does this mean that up to 4M conntrack entries can be offloaded to the hardware?
Yes, the conntrack table has 4M entries.
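For context, the 4M comes from the pool selection at table creation time: requesting
POOL_NEXT_SIZE picks the largest pool size that still fits in the 16M region. Below is a
simplified, stand-alone sketch of that selection logic (an illustration only, not the exact
driver code; the constants mirror the ESW_POOLS/ESW_SIZE values you quoted):

/* Simplified sketch: pick a table size from the pool array.
 * POOL_NEXT_SIZE means "give me the largest size still available".
 */
#include <stdio.h>

#define POOL_NEXT_SIZE  0
#define ESW_SIZE        (16 * 1024 * 1024)
#define ARRAY_SIZE(a)   (sizeof(a) / sizeof((a)[0]))

static const unsigned int ESW_POOLS[] = { 4 * 1024 * 1024,
					  1 * 1024 * 1024,
					  64 * 1024,
					  128 };

/* used[i] tracks how much of the 16M region pool i has already handed out */
static unsigned int used[ARRAY_SIZE(ESW_POOLS)];

static int get_avail_sz_from_pool(int desired_size)
{
	int i, found_i = -1;

	/* Walk from the smallest pool size up; remember the largest pool
	 * that still fits in its 16M region and is big enough. With
	 * POOL_NEXT_SIZE the loop never breaks early, so the largest
	 * available size (4M here) wins.
	 */
	for (i = ARRAY_SIZE(ESW_POOLS) - 1; i >= 0; i--) {
		if (used[i] + ESW_POOLS[i] <= ESW_SIZE &&
		    (int)ESW_POOLS[i] > desired_size) {
			found_i = i;
			if (desired_size != POOL_NEXT_SIZE)
				break;
		}
	}

	if (found_i == -1)
		return 0;

	used[found_i] += ESW_POOLS[found_i];
	return ESW_POOLS[found_i];
}

int main(void)
{
	/* ct and ct_nat each request POOL_NEXT_SIZE, so each gets a 4M table */
	printf("ct:     %d\n", get_avail_sz_from_pool(POOL_NEXT_SIZE));
	printf("ct_nat: %d\n", get_avail_sz_from_pool(POOL_NEXT_SIZE));
	return 0;
}

Running this prints 4194304 twice, which matches the size the ct and ct_nat tables get today.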
>
> Is the flow table map fixed in the FW? And could the size be changed to 8M through the following?
>
> ESW_POOLS[] = { 8 * 1024 * 1024,
>                 1 * 1024 * 1024,
>                 64 * 1024,
>                 128 };
The size cannot be increased due to internal FW limitations.
We are currently working on an alternative design for increased scalability.
>
>
> BR
>
> wenxu
>