lists.openwall.net - Open Source and information security mailing list archives
Message-ID: <591d2e47-edd9-453a-a888-c43ba5b76a1e@linux.ibm.com>
Date:   Thu, 20 Jan 2022 17:00:18 +0100
From:   Stefan Raspl <raspl@...ux.ibm.com>
To:     Tony Lu <tonylu@...ux.alibaba.com>,
        Karsten Graul <kgraul@...ux.ibm.com>
Cc:     "D. Wythe" <alibuda@...ux.alibaba.com>, dust.li@...ux.alibaba.com,
        kuba@...nel.org, davem@...emloft.net, netdev@...r.kernel.org,
        linux-s390@...r.kernel.org, linux-rdma@...r.kernel.org
Subject: Re: [PATCH net-next v2] net/smc: Reduce overflow of smc clcsock
 listen queue

On 1/20/22 14:39, Tony Lu wrote:
> On Thu, Jan 13, 2022 at 09:07:51AM +0100, Karsten Graul wrote:
>> On 06/01/2022 08:05, Tony Lu wrote:
>>
>> I think of the following approach: the default maximum of active workers in a
>> work queue is defined by WQ_MAX_ACTIVE (512). When this limit is hit, we
>> have slightly fewer than 512 parallel SMC handshakes running at that moment,
>> and new workers would be enqueued without becoming active.
>> In that case (max active workers reached) I would tend to fall back new
>> connections to TCP. We would end up with fewer connections using SMC, but for
>> user space applications there would be nearly no change compared to TCP
>> (no dropped TCP connection attempts, no need to reconnect).
>> Imho, most users will never run into this problem, so I think it's fine to
>> behave like this.
> 
> This makes sense to me, thanks.
> 
>>
>> As far as I understand you, you still see a good reason to have another
>> behavior implemented in parallel (controllable by the user) which enqueues all
>> incoming connections, as in your patch proposal? But how would you deal with
>> the out-of-memory problems that might happen with that?
> 
> There is a possible scenario in which the user only wants to use the SMC
> protocol, such as a performance benchmark, or explicitly specifies SMC;
> they can afford a lower rate of incoming connection creation, but enjoy
> the higher QPS after the connections are established.
> 
>> Let's decide that when you have a specific control that you want to implement.
>> I want to have a very good reason before introducing another interface into
>> the SMC module, making the code more complex and all of that. The decision for
>> the netlink interface was also made because we had the impression that this is
>> the NEW way to go, and since we had no interface before, we started with the
>> most modern way to implement it.
>>
>> TCP et al. have a history with sysfs, so that's why it is still there.
>> But I might be wrong on that...
> 
> Thanks for the background on the decision for the new control interface,
> which I didn't know about; I understand your reasoning. We would be glad to
> contribute the knobs to smc_netlink.c in the next patches.
> 
> There is something I want to discuss here about persistent configuration:
> we need to store the new config on the system and make sure that it is
> loaded correctly after boot. A possible solution is to extend smc-tools to
> handle the new config, and work with systemd for auto-loading. If that
> works, we would be glad to contribute these changes to smc-tools.
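
One way the boot-time auto-loading Tony describes could look is a systemd
oneshot unit that replays a stored configuration via an smc-tools helper.
This is purely a hypothetical sketch: the smc-config helper, its "apply"
subcommand, and the /etc/smc.conf path are assumptions for illustration,
not existing smc-tools features.

```ini
# /etc/systemd/system/smc-config.service (hypothetical sketch)
[Unit]
Description=Apply persistent SMC configuration
# Run early, before services start accepting SMC connections
After=network-pre.target
Wants=network-pre.target

[Service]
Type=oneshot
# "smc-config apply" is a hypothetical smc-tools subcommand that would read
# /etc/smc.conf and apply each knob over the SMC netlink interface.
ExecStart=/usr/sbin/smc-config apply /etc/smc.conf
RemainAfterExit=yes

[Install]
WantedBy=multi-user.target
```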

I'd definitely be open to looking into patches for smc-tools that extend it to 
configure SMC properties, and that provide the capability to read (and apply) a 
config from a file! We can also discuss what you envision as an interface before 
you implement it.

Ciao,
Stefan
