Message-ID: <87edqraa4a.fsf@nvidia.com>
Date:   Tue, 14 Feb 2023 19:19:33 -0800
From:   Rahul Rameshbabu <rrameshbabu@...dia.com>
To:     Vadim Fedorenko <vadfed@...a.com>
Cc:     Saeed Mahameed <saeedm@...dia.com>,
        Saeed Mahameed <saeed@...nel.org>,
        Jakub Kicinski <kuba@...nel.org>,
        "netdev@...r.kernel.org" <netdev@...r.kernel.org>,
        Tariq Toukan <tariqt@...dia.com>
Subject: Re: [pull request][net 00/10] mlx5 fixes 2023-02-07

Sorry for the late response. Needed a bit of time to wrap my head around
the patch.

On Thu, 09 Feb, 2023 01:10:50 +0000 Vadim Fedorenko <vadfed@...a.com> wrote:
> On 08/02/2023 23:52, Rahul Rameshbabu wrote:
>> On Wed, 08 Feb, 2023 21:36:52 +0000 Vadim Fedorenko <vadfed@...a.com> wrote:
>>> On 08/02/2023 21:16, Rahul Rameshbabu wrote:
>>>> On Wed, 08 Feb, 2023 12:52:55 -0800 Saeed Mahameed <saeedm@...dia.com> wrote:
>>>>> Hi Vadim,
>>>>>
>>>>> We have some new findings internally, and Rahul is testing your patches.
>>>>> He found some issues where the patches don't handle the case where only
>>>>> drops are happening, meaning no OOO.
>>>>>
>>>>> Rahul can share more details; he's still working on this, and I believe
>>>>> we will have a fully detailed follow-up by the end of the week.
>>>> One thing I noticed was that the conditional in mlx5e_ptp_ts_cqe_ooo in
>>>> v5 does handle OOO, but it also considers a monotonically increasing
>>>> case such as 1,3,4 to be OOO (a resync does not occur when I tested
>>>> this case).
>>>>
>>>> Here is a simple patch I made to verify this:
>>>> diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/ptp.c b/drivers/net/ethernet/mellanox/mlx5/core/en/ptp.c
>>>> index ae75e230170b..dfa5c53bd0d5 100644
>>>> --- a/drivers/net/ethernet/mellanox/mlx5/core/en/ptp.c
>>>> +++ b/drivers/net/ethernet/mellanox/mlx5/core/en/ptp.c
>>>> @@ -125,6 +125,8 @@ static void mlx5e_ptp_handle_ts_cqe(struct mlx5e_ptpsq *ptpsq,
>>>>  	struct sk_buff *skb;
>>>>  	ktime_t hwtstamp;
>>>>  
>>>> +	pr_info("wqe_counter value: %u\n", skb_id);
>>>> +
>>>>  	if (unlikely(MLX5E_RX_ERR_CQE(cqe))) {
>>>>  		skb = mlx5e_skb_fifo_pop(&ptpsq->skb_fifo);
>>>>  		ptpsq->cq_stats->err_cqe++;
>>>> @@ -133,6 +135,7 @@ static void mlx5e_ptp_handle_ts_cqe(struct mlx5e_ptpsq *ptpsq,
>>>>  
>>>>  	if (mlx5e_ptp_ts_cqe_drop(ptpsq, skb_cc, skb_id)) {
>>>>  		if (mlx5e_ptp_ts_cqe_ooo(ptpsq, skb_id)) {
>>>> +			pr_info("Marked ooo wqe_counter: %u\n", skb_id);
>>>>  			/* already handled by a previous resync */
>>>>  			ptpsq->cq_stats->ooo_cqe_drop++;
>>>>  			return;
>>>> diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_tx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_tx.c
>>>> index f7897ddb29c5..8582f0535e21 100644
>>>> --- a/drivers/net/ethernet/mellanox/mlx5/core/en_tx.c
>>>> +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_tx.c
>>>> @@ -646,7 +646,7 @@ static void mlx5e_cqe_ts_id_eseg(struct mlx5e_ptpsq *ptpsq, struct sk_buff *skb,
>>>>  				 struct mlx5_wqe_eth_seg *eseg)
>>>>  {
>>>>  	if (ptpsq->ts_cqe_ctr_mask && unlikely(skb_shinfo(skb)->tx_flags & SKBTX_HW_TSTAMP))
>>>> -		eseg->flow_table_metadata = cpu_to_be32(ptpsq->skb_fifo_pc &
>>>> +		eseg->flow_table_metadata = cpu_to_be32((ptpsq->skb_fifo_pc * 2) &
>>>>  							ptpsq->ts_cqe_ctr_mask);
>>>>  }
>>>>
>>>> Basically, I multiply the wqe_counter written in the WQE by two. This
>>>> creates a situation where we have "lost" the CQE with wqe_counter
>>>> index one, but the patch treats that as OOO, which effectively
>>>> disables our normal resiliency path of resyncing on drops. At that
>>>> point, the patch could just remove the resync logic altogether
>>>> whenever a drop is detected.
>>>>
>>>> What I noticed was that the case of 0,2 was marked as OOO even though
>>>> out of order would be something like 0,2,1:
>>>>
>>>>     [Feb 8 02:40] wqe_counter value: 0
>>>>     [ +24.199404] wqe_counter value: 2
>>>>     [  +0.001041] Marked ooo wqe_counter: 2
>>>>
>>>> I acknowledge the OOO issue, but I am not sure the patch as is
>>>> correctly solves it.
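>>>>
>>>> For reference, my reading of the v5 window check reduces to the
>>>> following userspace sketch (the function name, the u16 arithmetic,
>>>> and the exact counter values are my assumptions, not the driver
>>>> code):
>>>>
>>>> #include <stdio.h>
>>>> #include <stdint.h>
>>>>
>>>> /* A CQE counts as in-window when its wqe_counter lies in
>>>>  * [fifo_cc, fifo_pc), computed in modular u16 arithmetic.
>>>>  */
>>>> static int is_ooo(uint16_t cc, uint16_t pc, uint16_t wqe_counter)
>>>> {
>>>>     return (uint16_t)(wqe_counter - cc) >= (uint16_t)(pc - cc);
>>>> }
>>>>
>>>> int main(void)
>>>> {
>>>>     /* With the *2 hack only ids 0 and 2 are posted, so pc == 2.
>>>>      * CQE 0 is in-window; after it is consumed (cc becomes 1),
>>>>      * CQE 2 falls outside [1, 2) and is flagged OOO, matching
>>>>      * the log above.
>>>>      */
>>>>     printf("cqe 0: %s\n", is_ooo(0, 2, 0) ? "ooo" : "in window");
>>>>     printf("cqe 2: %s\n", is_ooo(1, 2, 2) ? "ooo" : "in window");
>>>>     return 0;
>>>> }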
>>>>
>>>
>>> With this patch it's not clear how many skbs were in the queue. AFAIU,
>>> if there was only skb id = 1 in the queue, then id = 2 is definitely
>>> OOO because it couldn't be found in the queue. Otherwise a resync
>>> should be triggered, and that is what I have seen in our setup with
>>> the v5 patches.
>> 
>> With this patch at the time of testing, the pc is only 2 because we
>> skipped generating a WQE with a wqe_counter of 1. This matches your
>> expectation that it's OOO since we don't have a pc of 3 (wqe_counter
>> <skb id> 1 was never actually put on the WQ).
>> 
>> One thing I am still concerned about, then:
>> 
>>    wqe_counter   0   3   1   2
>>    skb_cc        0   1   2   3
>>    skb_pc        4   4   4   4
>> 
>> Let's say we encounter wqe_counter 3 and the pc is currently 4. OOO is
>> not triggered and we go into the resync logic. The resync logic then
>> consumes 3, 1, and 2 out of order, which is still an issue?
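>>
>> To make the concern concrete, here is a minimal userspace sketch of
>> what a naive in-order pop would do with that arrival order (the
>> arrays and output are illustrative, not the driver code):
>>
>> #include <stdio.h>
>> #include <stdint.h>
>>
>> int main(void)
>> {
>>     uint16_t cqe_order[]  = { 0, 3, 1, 2 }; /* wqe_counter per CQE */
>>     uint16_t fifo_order[] = { 0, 1, 2, 3 }; /* skb ids in post order */
>>
>>     /* pairing CQE i with fifo slot i would mis-stamp skbs 1..3 */
>>     for (int i = 0; i < 4; i++)
>>         printf("cqe for wqe %u would stamp skb %u\n",
>>                cqe_order[i], fifo_order[i]);
>>     return 0;
>> }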
>
> Resync logic will drop 1 and 2. The 3 will be consumed, and the logic
> will wait for 4 as the next one. In this case it's OK to count 1 and
> 2 as OOO because both of them arrived after 3. I have to mention
> that I didn't implement the "resync logic". It was implemented
> earlier, because according to reviewers of previous versions of the
> patches there should never be OOO CQEs. My patches do not change the
> logic; they just fix the implementation, which currently crashes the
> kernel. Once the root cause in FW (which is completely closed source,
> so I can only guess what logic is implemented in it) is found, we can
> re-think the logic. But for now I just want to fix the easily
> reproducible crash, even if the patch is a "bandage".
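>
> Roughly, the walk I describe looks like this (a userspace sketch of
> the behavior, with made-up names, not the actual mlx5 code):
>
> #include <stdio.h>
> #include <stdint.h>
>
> static uint16_t fifo_cc;     /* id of the skb at the fifo head */
> static uint16_t fifo_pc = 4; /* four skbs posted: ids 0..3 */
>
> static void handle_cqe(uint16_t wqe_counter)
> {
>     /* outside [cc, pc) in modular u16 arithmetic -> OOO, ignore */
>     if ((uint16_t)(wqe_counter - fifo_cc) >=
>         (uint16_t)(fifo_pc - fifo_cc)) {
>         printf("cqe %u: ooo, ignored\n", wqe_counter);
>         return;
>     }
>     /* resync: drop every skb posted before the one this CQE stamps */
>     while (fifo_cc != wqe_counter)
>         printf("cqe %u: drop skb %u (no timestamp)\n",
>                wqe_counter, fifo_cc++);
>     printf("cqe %u: deliver skb %u with hw timestamp\n",
>            wqe_counter, fifo_cc++);
> }
>
> int main(void)
> {
>     uint16_t order[] = { 0, 3, 1, 2 };
>
>     for (int i = 0; i < 4; i++)
>         handle_cqe(order[i]);
>     return 0;
> }
>
> This consumes 0, drops 1 and 2 while resyncing to 3, and then ignores
> the late CQEs for 1 and 2 as OOO.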
>

Agree with this explanation; I tested the behavior to confirm it. I
also tested a couple of other cases and understand how this patch can
use the resync logic to drop out-of-order skbs and then ignore the
corresponding CQEs with the OOO check.

>> 
>>>
>>>
>>>>>
>>>>> Sorry for the late update, but these new findings are only from yesterday.
>>>>>
>>>>> Thanks,
>>>>> Saeed.
>>>>>
>>>>>    -------------------------------------------------------------------------------------------------------------------------
>>>>> From: Vadim Fedorenko <vadfed@...a.com>
>>>>> Sent: Wednesday, February 8, 2023 4:40 AM
>>>>> To: Saeed Mahameed <saeed@...nel.org>; Jakub Kicinski <kuba@...nel.org>
>>>>> Cc: Saeed Mahameed <saeedm@...dia.com>; netdev@...r.kernel.org <netdev@...r.kernel.org>; Tariq Toukan <tariqt@...dia.com>
>>>>> Subject: Re: [pull request][net 00/10] mlx5 fixes 2023-02-07
>>>>>
>>>>> On 08/02/2023 03:02, Saeed Mahameed wrote:
>>>>>> From: Saeed Mahameed <saeedm@...dia.com>
>>>>>>
>>>>>> This series provides bug fixes to mlx5 driver.
>>>>>> Please pull and let me know if there is any problem.
>>>>>>
>>>>> Still no patches for the PTP queue? That's a bit weird.
>>>>> Do you think that they are not ready to be in -net?
>>>> -- Rahul Rameshbabu

Acked-by: Rahul Rameshbabu <rrameshbabu@...dia.com>
