Date: Fri, 8 Sep 2023 11:07:36 +0200
From: Alexandra Winter <wintera@...ux.ibm.com>
To: "D. Wythe" <alibuda@...ux.alibaba.com>, kgraul@...ux.ibm.com,
        wenjia@...ux.ibm.com, jaka@...ux.ibm.com
Cc: kuba@...nel.org, davem@...emloft.net, netdev@...r.kernel.org,
        linux-s390@...r.kernel.org, linux-rdma@...r.kernel.org
Subject: Re: [RFC net-next 0/2] Optimize the parallelism of SMC-R connections



On 06.09.23 15:55, D. Wythe wrote:
> From: "D. Wythe" <alibuda@...ux.alibaba.com>
> 
> This patchset attempts to optimize the parallelism of SMC-R connections
> in quite a simple way: by reducing unnecessary blocking on locks.
> 
> According to off-CPU profiling, the SMC workers' off-CPU time breaks
> down as follows:
> 
> smc_listen_work 			(48.17%)
> 	__mutex_lock.isra.11 		(47.96%)
> 
> An ideal SMC-R connection process should block only on network IO
> events, but it is quite clear that SMC-R connections currently spend
> most of their time queued on the lock.
> 
> Before creating a connection, we first try to create it without
> allowing the creation of a new lgr. If that succeeds, the connection
> does not rely on a new link group, and locking on xxx_lgr_pending is
> no longer necessary (sketched below, after this cover letter).
> 
> Note that removing this lock will not have an immediate effect in the
> current version, as there are still some concurrency issues in the
> SMC handshake phase. Regardless, removing this lock is a prerequisite
> for other optimizations.
> 
> If you have any questions or suggestions, please let me know.
> 
> D. Wythe (2):
>   net/smc: refactoring lgr pending lock
>   net/smc: remove locks smc_client_lgr_pending and
>     smc_server_lgr_pending
> 
>  net/smc/af_smc.c   | 24 ++++++++++++------------
>  net/smc/smc_clc.h  |  1 +
>  net/smc/smc_core.c | 28 ++++++++++++++++++++++++++--
>  net/smc/smc_core.h | 21 +++++++++++++++++++++
>  4 files changed, 60 insertions(+), 14 deletions(-)
> 
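
A minimal sketch of the two-phase shape described in the cover letter
above, with hypothetical helper names (smc_conn_try_reuse_lgr() and
smc_conn_create_new_lgr() are illustrative, not taken from the patches;
smc_server_lgr_pending is the existing global mutex):

static int smc_listen_create_conn(struct smc_sock *smc,
				  struct smc_init_info *ini)
{
	int rc;

	/* Phase 1: try to attach to an existing link group. No new
	 * LGR may be created here, so the global pending mutex is
	 * not needed at all on this path.
	 */
	rc = smc_conn_try_reuse_lgr(smc, ini);	/* hypothetical helper */
	if (!rc)
		return 0;

	/* Phase 2: no reusable LGR exists; serialize the creation of
	 * a new one against other first contacts.
	 */
	mutex_lock(&smc_server_lgr_pending);
	rc = smc_conn_create_new_lgr(smc, ini);	/* hypothetical helper */
	mutex_unlock(&smc_server_lgr_pending);
	return rc;
}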


I have to admit that locking in SMC is quite confusing to me, so these are just my thoughts.

Your proposal seems to make things even more complex.

I understand the goal of optimizing parallelism.
Today we have the global smc_server/client_lgr_pending mutexes AND smc_lgr_list.lock (and more).
There seems to be some overlap in scope.
Maybe there is some way to reduce the length of the locked paths?
Or to use other mechanisms than the big fat smc_server/client_lgr_pending mutex?
e.g.:
If you think you can unlock after __smc_conn_create in the reuse-existing-LGR case,
why is the lock needed until after smc_clc_send_confirm in the new-LGR case?
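
For reference, the asymmetry in question, sketched with the names used
above (control flow heavily simplified and argument lists shortened, so
this is illustrative rather than the literal patch):

	rc = __smc_conn_create(smc, ini);  /* reuse an existing LGR only */
	if (rc) {
		/* new-LGR case: the pending mutex stays held ... */
		mutex_lock(&smc_server_lgr_pending);
		rc = __smc_conn_create(smc, ini);  /* now allow a new LGR */
		/* ... link setup, CLC accept sent ... */
		rc = smc_clc_send_confirm(smc, ini);  /* args simplified */
		mutex_unlock(&smc_server_lgr_pending);  /* ... until here */
	}
	/* reuse case: never waited on the mutex at all */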


Your approach of storing the global lock per ini and then sometimes
double-freeing it seems a bit homebrewed, though.
E.g., I'm afraid the existing lock-checking algorithms could not verify
this pattern.
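
To illustrate the concern, a sketch of such a pattern (the field and
helper names here are made up for illustration, not copied from the
patches):

	struct smc_init_info {
		/* ... existing members ... */
		struct mutex *lgr_pending;  /* currently held lock, or NULL */
	};

	static void smc_lgr_pending_lock(struct smc_init_info *ini,
					 struct mutex *lock)
	{
		mutex_lock(lock);
		ini->lgr_pending = lock;
	}

	static void smc_lgr_pending_unlock(struct smc_init_info *ini)
	{
		/* Whether the mutex is still held at this point depends
		 * on runtime state. Static annotations such as sparse's
		 * __must_hold()/__releases() cannot describe that, and a
		 * stray second call would unlock a mutex the caller no
		 * longer owns.
		 */
		if (ini->lgr_pending) {
			mutex_unlock(ini->lgr_pending);
			ini->lgr_pending = NULL;
		}
	}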

