Message-ID: <YelmFWn7ot0iQCYG@TonyMac-Alibaba>
Date: Thu, 20 Jan 2022 21:39:33 +0800
From: Tony Lu <tonylu@...ux.alibaba.com>
To: Karsten Graul <kgraul@...ux.ibm.com>
Cc: "D. Wythe" <alibuda@...ux.alibaba.com>, dust.li@...ux.alibaba.com,
kuba@...nel.org, davem@...emloft.net, netdev@...r.kernel.org,
linux-s390@...r.kernel.org, linux-rdma@...r.kernel.org
Subject: Re: [PATCH net-next v2] net/smc: Reduce overflow of smc clcsock
listen queue
On Thu, Jan 13, 2022 at 09:07:51AM +0100, Karsten Graul wrote:
> On 06/01/2022 08:05, Tony Lu wrote:
>
> I think of the following approach: the default maximum of active workers in a
> work queue is defined by WQ_MAX_ACTIVE (512). When this limit is hit, we
> have slightly fewer than 512 parallel SMC handshakes running at the moment,
> and new workers would be enqueued without becoming active.
> In that case (max active workers reached) I would tend to fall back new connections
> to TCP. We would end up with fewer connections using SMC, but for the user space
> applications there would be nearly no change compared to TCP (no dropped TCP connection
> attempts, no need to reconnect).
> Imho, most users will never run into this problem, so I think it's fine to behave like this.
This makes sense to me, thanks.
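For reference, a rough sketch of how that fallback check could look on the
listen path. The counter and helper names (smc_hs_inflight,
smc_hs_should_fallback, smc_hs_finished) are made up for illustration,
not the actual SMC code:

#include <linux/atomic.h>
#include <linux/workqueue.h>

/* Hypothetical counter of handshake workers currently queued or
 * running; in a real patch this would live in net/smc/af_smc.c. */
static atomic_t smc_hs_inflight = ATOMIC_INIT(0);

/* Called once per incoming connection before queueing the handshake
 * worker; returns true when the connection should fall back to TCP. */
static bool smc_hs_should_fallback(void)
{
	/* WQ_MAX_ACTIVE (512) is the default limit of concurrently
	 * active work items; beyond it new work stays inactive in
	 * the queue, so back off to plain TCP instead. */
	if (atomic_inc_return(&smc_hs_inflight) > WQ_MAX_ACTIVE) {
		atomic_dec(&smc_hs_inflight);
		return true;
	}
	return false;
}

/* The handshake worker calls this when it finishes, successfully
 * or not, so the counter tracks in-flight work. */
static void smc_hs_finished(void)
{
	atomic_dec(&smc_hs_inflight);
}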
>
> As far as I understand you, you still see a good reason in having another behavior
> implemented in parallel (controllable by user) which enqueues all incoming connections
> like in your patch proposal? But how to deal with the out-of-memory problems that might
> happen with that?
There is a possible scenario where the user only wants to use the SMC
protocol, for example in a performance benchmark, or when SMC is
explicitly specified. Such users can afford a lower speed of incoming
connection creation, but enjoy the higher QPS after the connections are
established.
> Lets decide that when you have a specific control that you want to implement.
> I want to have a very good reason to introduce another interface into the SMC module,
> making the code more complex and all of that. The decision for the netlink interface
> was also done because we have the impression that this is the NEW way to go, and
> since we had no interface before we started with the most modern way to implement it.
>
> TCP et al have a history with sysfs, so that's why it is still there.
> But I might be wrong on that...
Thanks for the background on the decision for the new control interface,
which I didn't know about. I understand your reasoning for the netlink
interface now. We are glad to contribute the knobs to smc_netlink.c in
follow-up patches.
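As a rough sketch of what such a knob could look like in
net/smc/smc_netlink.c (the command and attribute names below are
hypothetical, just to illustrate the shape of the change):

#include <net/genetlink.h>

/* Hypothetical attribute carrying the handshake-limit setting. */
enum {
	SMC_NLA_HS_LIMIT_UNSPEC,
	SMC_NLA_HS_LIMIT,	/* u8: 1 = queue all, 0 = fall back when busy */
	__SMC_NLA_HS_LIMIT_MAX,
};

static int smc_nl_set_hs_limit(struct sk_buff *skb, struct genl_info *info)
{
	u8 queue_all;

	if (!info->attrs[SMC_NLA_HS_LIMIT])
		return -EINVAL;
	queue_all = nla_get_u8(info->attrs[SMC_NLA_HS_LIMIT]);
	/* store queue_all where the listen path can read it,
	 * e.g. in a per-netns field */
	return 0;
}

/* Entry to be added to the existing SMC genl_ops table. */
static const struct genl_ops smc_hs_limit_ops[] = {
	{
		.cmd	= SMC_NETLINK_SET_HS_LIMIT,	/* hypothetical cmd */
		.doit	= smc_nl_set_hs_limit,
		.flags	= GENL_ADMIN_PERM,
	},
};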
There is something I want to discuss here about persistent
configuration: we need to store the new config on the system and make
sure it is loaded correctly after boot. A possible solution is to
extend smc-tools with the new config options and use a systemd unit for
auto-loading at boot. If that works, we are glad to contribute these
changes to smc-tools as well.
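For example, a oneshot systemd unit could reapply the stored settings at
boot. The "smc load" subcommand and the config path are hypothetical,
just to show the idea:

[Unit]
Description=Apply persistent SMC settings
After=network-pre.target
Wants=network-pre.target

[Service]
Type=oneshot
# hypothetical smc-tools subcommand reading a stored config file
ExecStart=/usr/sbin/smc load /etc/smc/smc.conf
RemainAfterExit=yes

[Install]
WantedBy=multi-user.target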
Thank you.
Tony Lu