Message-ID: <VI1PR05MB4605F8A613096438E987FC10F1020@VI1PR05MB4605.eurprd05.prod.outlook.com>
Date: Thu, 15 Oct 2020 02:25:11 +0000
From: Hoang Huu Le <hoang.h.le@...tech.com.au>
To: Jakub Kicinski <kuba@...nel.org>
CC: "tipc-discussion@...ts.sourceforge.net"
<tipc-discussion@...ts.sourceforge.net>,
"jmaloy@...hat.com" <jmaloy@...hat.com>,
"maloy@...jonn.com" <maloy@...jonn.com>,
"ying.xue@...driver.com" <ying.xue@...driver.com>,
"netdev@...r.kernel.org" <netdev@...r.kernel.org>
Subject: RE: [net] tipc: re-configure queue limit for broadcast link
Thanks for your review.
Yes, in this commit we intend to fix only the queue limit calculation;
we plan to address both issues in a separate fix. However, the default
window size (i.e. BCLINK_WIN_DEFAULT) should be used here, since we keep
a fixed window size for the broadcast link.
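For illustration, a minimal sketch of how that hunk might look with the
default window (this is only a sketch of the idea above, not the actual
follow-up patch, and it assumes BCLINK_WIN_DEFAULT is visible in bcast.c):

        mtu = tipc_bearer_mtu(net, i);
        if (mtu < tipc_link_mtu(bb->link)) {
                tipc_link_set_mtu(bb->link, mtu);
                /* Re-apply the fixed default window instead of the
                 * current max window (sketch; assumes
                 * BCLINK_WIN_DEFAULT is in scope here).
                 */
                tipc_link_set_queue_limits(bb->link, BCLINK_WIN_DEFAULT,
                                           BCLINK_WIN_DEFAULT);
        }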
Regards,
Hoang
> -----Original Message-----
> From: Jakub Kicinski <kuba@...nel.org>
> Sent: Thursday, October 15, 2020 7:47 AM
> To: Hoang Huu Le <hoang.h.le@...tech.com.au>
> Cc: tipc-discussion@...ts.sourceforge.net; jmaloy@...hat.com; maloy@...jonn.com; ying.xue@...driver.com;
> netdev@...r.kernel.org
> Subject: Re: [net] tipc: re-configure queue limit for broadcast link
>
> On Tue, 13 Oct 2020 13:18:10 +0700 Hoang Huu Le wrote:
> > The queue limit of the broadcast link is calculated based on the initial
> > MTU. However, when the MTU value changes (e.g. the MTU is changed manually
> > on the NIC device, via MTU negotiation, etc.) we do not re-calculate the
> > queue limit, so throughput does not reflect the change.
> >
> > Fix it by calling the function that re-calculates the queue limit of the
> > broadcast link.
> >
> > Acked-by: Jon Maloy <jmaloy@...hat.com>
> > Signed-off-by: Hoang Huu Le <hoang.h.le@...tech.com.au>
> > ---
> > net/tipc/bcast.c | 6 +++++-
> > 1 file changed, 5 insertions(+), 1 deletion(-)
> >
> > diff --git a/net/tipc/bcast.c b/net/tipc/bcast.c
> > index 940d176e0e87..c77fd13e2777 100644
> > --- a/net/tipc/bcast.c
> > +++ b/net/tipc/bcast.c
> > @@ -108,6 +108,7 @@ static void tipc_bcbase_select_primary(struct net *net)
> > {
> > struct tipc_bc_base *bb = tipc_bc_base(net);
> > int all_dests = tipc_link_bc_peers(bb->link);
> > + int max_win = tipc_link_max_win(bb->link);
> > int i, mtu, prim;
> >
> > bb->primary_bearer = INVALID_BEARER_ID;
> > @@ -121,8 +122,11 @@ static void tipc_bcbase_select_primary(struct net *net)
> > continue;
> >
> > mtu = tipc_bearer_mtu(net, i);
> > - if (mtu < tipc_link_mtu(bb->link))
> > + if (mtu < tipc_link_mtu(bb->link)) {
> > tipc_link_set_mtu(bb->link, mtu);
> > + tipc_link_set_queue_limits(bb->link, max_win,
> > + max_win);
>
> Is max/max okay here? Other places seem to use BCLINK_WIN_MIN.
>
> > + }
> > bb->bcast_support &= tipc_bearer_bcast_support(net, i);
> > if (bb->dests[i] < all_dests)
> > continue;