Message-ID: <20201014174716.44b4fca3@kicinski-fedora-PC1C0HJN.hsd1.ca.comcast.net>
Date: Wed, 14 Oct 2020 17:47:16 -0700
From: Jakub Kicinski <kuba@...nel.org>
To: Hoang Huu Le <hoang.h.le@...tech.com.au>
Cc: tipc-discussion@...ts.sourceforge.net, jmaloy@...hat.com,
maloy@...jonn.com, ying.xue@...driver.com, netdev@...r.kernel.org
Subject: Re: [net] tipc: re-configure queue limit for broadcast link
On Tue, 13 Oct 2020 13:18:10 +0700 Hoang Huu Le wrote:
> The queue limit of the broadcast link is calculated based on the initial
> MTU. However, when the MTU value changes (e.g. a manual MTU change on the
> NIC device, MTU negotiation, etc.) we do not re-calculate the queue
> limit. As a result, throughput does not reflect the change.
>
> Fix this by calling the function that re-calculates the queue limit of
> the broadcast link.
>
> Acked-by: Jon Maloy <jmaloy@...hat.com>
> Signed-off-by: Hoang Huu Le <hoang.h.le@...tech.com.au>
> ---
> net/tipc/bcast.c | 6 +++++-
> 1 file changed, 5 insertions(+), 1 deletion(-)
>
> diff --git a/net/tipc/bcast.c b/net/tipc/bcast.c
> index 940d176e0e87..c77fd13e2777 100644
> --- a/net/tipc/bcast.c
> +++ b/net/tipc/bcast.c
> @@ -108,6 +108,7 @@ static void tipc_bcbase_select_primary(struct net *net)
> {
> struct tipc_bc_base *bb = tipc_bc_base(net);
> int all_dests = tipc_link_bc_peers(bb->link);
> + int max_win = tipc_link_max_win(bb->link);
> int i, mtu, prim;
>
> bb->primary_bearer = INVALID_BEARER_ID;
> @@ -121,8 +122,11 @@ static void tipc_bcbase_select_primary(struct net *net)
> continue;
>
> mtu = tipc_bearer_mtu(net, i);
> - if (mtu < tipc_link_mtu(bb->link))
> + if (mtu < tipc_link_mtu(bb->link)) {
> tipc_link_set_mtu(bb->link, mtu);
> + tipc_link_set_queue_limits(bb->link, max_win,
> + max_win);
Is max/max okay here? Other places seem to use BCLINK_WIN_MIN.
> + }
> bb->bcast_support &= tipc_bearer_bcast_support(net, i);
> if (bb->dests[i] < all_dests)
> continue;