Message-ID: <c3a624a6-68b5-46f3-e9ea-9e2acc65bb90@linux.ibm.com>
Date: Wed, 30 Aug 2023 17:22:06 +0200
From: Wenjia Zhang <wenjia@...ux.ibm.com>
To: Guangguan Wang <guangguan.wang@...ux.alibaba.com>, jaka@...ux.ibm.com,
        kgraul@...ux.ibm.com, tonylu@...ux.alibaba.com, davem@...emloft.net,
        edumazet@...gle.com, kuba@...nel.org, pabeni@...hat.com
Cc: horms@...nel.org, alibuda@...ux.alibaba.com, guwen@...ux.alibaba.com,
        linux-s390@...r.kernel.org, netdev@...r.kernel.org,
        linux-kernel@...r.kernel.org
Subject: Re: [RFC PATCH v2 net-next 4/6] net/smc: support max connections per
 lgr negotiation



On 30.08.23 05:17, Guangguan Wang wrote:
> 
> 
> On 2023/8/29 21:18, Wenjia Zhang wrote:
>>
>>
>> On 29.08.23 04:31, Guangguan Wang wrote:
>>>
>>>
>>> On 2023/8/28 20:54, Wenjia Zhang wrote:
>>>>
>>>>
>>>> On 15.08.23 08:31, Guangguan Wang wrote:
>>>>>
>>>>>
>>>>> On 2023/8/10 00:04, Wenjia Zhang wrote:
>>>>>>
>>>>>>
>>>>>> On 07.08.23 08:27, Guangguan Wang wrote:
>>>>>>> Support max connections per lgr negotiation for SMCR v2.1,
>>>>>>> which is one of the SMC v2.1 features.
>>>>> ...
>>>>>>> @@ -472,6 +473,9 @@ int smc_llc_send_confirm_link(struct smc_link *link,
>>>>>>>          confllc->link_num = link->link_id;
>>>>>>>          memcpy(confllc->link_uid, link->link_uid, SMC_LGR_ID_SIZE);
>>>>>>>          confllc->max_links = SMC_LLC_ADD_LNK_MAX_LINKS;
>>>>>>> +    if (link->lgr->smc_version == SMC_V2 &&
>>>>>>> +        link->lgr->peer_smc_release >= SMC_RELEASE_1)
>>>>>>> +        confllc->max_conns = link->lgr->max_conns;
>>>>>>>          /* send llc message */
>>>>>>>          rc = smc_wr_tx_send(link, pend);
>>>>>>>      put_out:
>>>>>>
>>>>>> Did I miss the negotiation process somewhere for the following scenario?
>>>>>> (Example 4 in the document)
>>>>>> Client                 Server
>>>>>>        Proposal(max conns(16))
>>>>>>        ----------------------->
>>>>>>
>>>>>>        Accept(max conns(32))
>>>>>>        <-----------------------
>>>>>>
>>>>>>        Confirm(max conns(32))
>>>>>>        ----------------------->
>>>>>
>>>>> Did you mean that the accepted max conns differs from Example 4 (i.e. is not 32) when the proposed max conns is 16?
>>>>>
>>>>> As described in (https://www.ibm.com/support/pages/node/7009315) page 41:
>>>>> ...
>>>>> 2. Max conns and max links values sent in the CLC Proposal are the client preferred values.
>>>>> 3. The v2.1 values sent in the Accept message are the final values. The client must accept the values or
>>>>> DECLINE the connection.
>>>>> 4. Max conns and links values sent in the CLC Accept are the final values (server dictates). The server can
>>>>> either honor the client’s preferred values or return different (negotiated but final) values.
>>>>> ...
>>>>>
>>>>> If I understand correctly, the server dictates the final value of max conns, but how it does so is not
>>>>> defined in SMC v2.1. In this patch, the server uses the minimum of the client's preferred value and the
>>>>> server's preferred value as the final max conns (see the sketch after the diagram below). Max links is
>>>>> negotiated with the same logic.
>>>>>
>>>>> Client                 Server
>>>>>         Proposal(max conns(client preferred))
>>>>>         ----------------------->
>>>>>           Accept(max conns(accepted value)) accepted value=min(client preferred, server preferred)
>>>>>         <-----------------------
>>>>>           Confirm(max conns(accepted value))
>>>>>         ----------------------->
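>>>>>
>>>>> In code terms, the server-side decision is roughly the following (a simplified
>>>>> sketch; the function name is only for illustration, not necessarily the exact
>>>>> code in the patch):
>>>>>
>>>>> #include <linux/minmax.h>
>>>>> #include <linux/types.h>
>>>>>
>>>>> /* server dictates the final value: take the smaller of the client's
>>>>>  * proposed value and the server's own preferred value
>>>>>  */
>>>>> static u8 smc_clc_decide_max_conns(u8 client_proposed, u8 server_preferred)
>>>>> {
>>>>> 	return min_t(u8, client_proposed, server_preferred);
>>>>> }
>>>>>
>>>>> The client then either takes over the value returned in the CLC Accept or
>>>>> declines the connection.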
>>>>>
>>>>> I will also add this description to the commit message for better understanding.
>>>>>
>>>>> Thanks,
>>>>> Guangguan Wang
>>>>>
>>>>>
>>>>>
>>>>
>>>> Sorry for the late answer, I'm just back from vacation.
>>>>
>>>> It's true that the protocol does not define how the server decides the final value(s). I'm wondering whether there is a reason for you to use the minimum value instead of the maximum (corresponding to the examples in the document). If both preferred values (the client's and the server's) are within the acceptable range, why not take the maximum? Is there any particular consideration behind that?
>>>>
>>>> Best,
>>>> Wenjia
>>>
>>> Since the default preferred max conns is already the maximum value of the range (16-255), I am wondering
>>> whether it makes sense to use the maximum for the decision, as the negotiated result would then always be 255.
>>> The same applies to max links.
>>>
>>> Thanks,
>>> Guangguan
>>
>> I don't think the server's default max conns must be the maximum value, i.e. 255. Since the patch series is already applied, let's say that the previous implementation uses the maximum value because max conns is not tunable, so an appropriate value was chosen as the default.
>> Now that the value is negotiable, the default value could also be the server's preferred value.
> If the server's default max conns can be a value other than the maximum, it's OK to use another decision algorithm (minimum, maximum or something else).
> But it is still an open question how to tune the default max conns; it may differ between Linux distributions and between RDMA NIC vendors.
> 
That's true. I think more discussion is needed. Let's talk about it 
offline first, since these patches are already applied.

BTW, thank you for the efforts!

Best,
Wenjia

>> But regarding max links, I'm fine with the minimum, and actually it has to be the minimum, because it should not be possible to try to add another link if one of the peers can and wants to support only one link, i.e. is down-level.
> Agree with you.
> 
>> Any opinion?
>>
>> Best,
>> Wenjia
> 
> Thanks,
> Guangguan Wang
