Message-ID: <d4f381e9-6698-3339-1d17-15e3abc71d06@suse.com>
Date: Thu, 16 Sep 2021 17:45:56 +0200
From: Jan Beulich <jbeulich@...e.com>
To: paul@....org, Wei Liu <wl@....org>
Cc: "xen-devel@...ts.xenproject.org" <xen-devel@...ts.xenproject.org>,
"netdev@...r.kernel.org" <netdev@...r.kernel.org>
Subject: Ping²: [PATCH] xen-netback: correct success/error reporting for the SKB-with-fraglist case
On 15.07.2021 10:58, Jan Beulich wrote:
> On 20.05.2021 13:46, Jan Beulich wrote:
>> On 25.02.2021 17:23, Paul Durrant wrote:
>>> On 25/02/2021 14:00, Jan Beulich wrote:
>>>> On 25.02.2021 13:11, Paul Durrant wrote:
>>>>> On 25/02/2021 07:33, Jan Beulich wrote:
>>>>>> On 24.02.2021 17:39, Paul Durrant wrote:
>>>>>>> On 23/02/2021 16:29, Jan Beulich wrote:
>>>>>>>> When re-entering the main loop of xenvif_tx_check_gop() a 2nd time, the
>>>>>>>> special considerations for the head of the SKB no longer apply. Don't
>>>>>>>> mistakenly report ERROR to the frontend for the first entry in the list,
>>>>>>>> even if - as far as I can tell - this shouldn't matter much, as the overall
>>>>>>>> transmit will need to be considered failed anyway.
>>>>>>>>
>>>>>>>> Signed-off-by: Jan Beulich <jbeulich@...e.com>
>>>>>>>>
>>>>>>>> --- a/drivers/net/xen-netback/netback.c
>>>>>>>> +++ b/drivers/net/xen-netback/netback.c
>>>>>>>> @@ -499,7 +499,7 @@ check_frags:
>>>>>>>> * the header's copy failed, and they are
>>>>>>>> * sharing a slot, send an error
>>>>>>>> */
>>>>>>>> - if (i == 0 && sharedslot)
>>>>>>>> + if (i == 0 && !first_shinfo && sharedslot)
>>>>>>>> xenvif_idx_release(queue, pending_idx,
>>>>>>>> XEN_NETIF_RSP_ERROR);
>>>>>>>> else
>>>>>>>>
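For illustration, a small stand-alone C model of the control flow the hunk
above touches (a sketch only: the constants and printouts are invented, and
solely the "i == 0 && !first_shinfo && sharedslot" gating mirrors the patched
condition in xenvif_tx_check_gop()):

#include <stdbool.h>
#include <stdio.h>

#define NR_FRAGS     2  /* frags of the head skb (first pass) */
#define NR_FRAG_LIST 2  /* frags of the frag-list skb (second pass) */

int main(void)
{
        const bool sharedslot = true;    /* header shared a grant slot with frag 0 */
        const void *first_shinfo = NULL; /* NULL during the first pass only */
        int nr = NR_FRAGS;

check_frags:
        for (int i = 0; i < nr; i++) {
                /* Only on the first pass does i == 0 denote the entry which
                 * may share a slot with the header; on the second pass it is
                 * merely the first frag-list entry.
                 */
                if (i == 0 && !first_shinfo && sharedslot)
                        printf("pass %d, entry %d: RSP_ERROR (slot shared with header)\n",
                               first_shinfo ? 2 : 1, i);
                else
                        printf("pass %d, entry %d: regular release\n",
                               first_shinfo ? 2 : 1, i);
        }

        if (!first_shinfo) {
                /* Switch from the frags to the frag list and re-enter the
                 * loop, as the real function does via "goto check_frags".
                 */
                first_shinfo = &nr;
                nr = NR_FRAG_LIST;
                goto check_frags;
        }

        return 0;
}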
>>>>>>>
>>>>>>> I think this will DTRT, but to my mind it would make more sense to clear
>>>>>>> 'sharedslot' before the 'goto check_frags' at the bottom of the function.
>>>>>>
>>>>>> That was my initial idea as well, but
>>>>>> - I think it is for a reason that the variable is "const".
>>>>>> - There is another use of it which would then instead need further
>>>>>> amending (and which I believe is at least part of the reason for
>>>>>> the variable to be "const").
>>>>>>
>>>>>
>>>>> Oh, yes. But now that I look again, don't you want:
>>>>>
>>>>> if (i == 0 && first_shinfo && sharedslot)
>>>>>
>>>>> ? (i.e. no '!')
>>>>>
>>>>> The comment states that the error should be indicated when the first
>>>>> frag contains the header in the case that the map succeeded but the
>>>>> prior copy from the same ref failed. This can only possibly be the case
>>>>> if this is the 'first_shinfo'.
>>>>
>>>> I don't think so, no - there's a difference between "first frag"
>>>> (at which point first_shinfo is NULL) and first frag list entry
>>>> (at which point first_shinfo is non-NULL).
>>>
>>> Yes, I realise I got it backwards. Confusing name, but the comment above
>>> its declaration does explain it.
>>>
>>>>
>>>>> (which is why I still think it is safe to unconst 'sharedslot' and
>>>>> clear it).
>>>>
>>>> And "no" here as well - this piece of code
>>>>
>>>> /* First error: if the header haven't shared a slot with the
>>>> * first frag, release it as well.
>>>> */
>>>> if (!sharedslot)
>>>> xenvif_idx_release(queue,
>>>> XENVIF_TX_CB(skb)->pending_idx,
>>>> XEN_NETIF_RSP_OKAY);
>>>>
>>>> specifically requires sharedslot to have the value that was
>>>> assigned to it at the start of the function (this property
>>>> doesn't go away when switching from fragments to frag list).
>>>> Note also how it uses XENVIF_TX_CB(skb)->pending_idx, i.e. the
>>>> value the local variable pending_idx was set from at the start
>>>> of the function.
>>>>
>>>
>>> True, we do have to deal with freeing up the header if the first map
>>> error comes on the frag list.
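To make that concrete, a minimal stand-alone model of why 'sharedslot' must
keep its function-entry value (the names and indices are illustrative only;
the helper merely mirrors the quoted "if (!sharedslot)" hunk and is not the
driver code):

#include <stdbool.h>
#include <stdio.h>

/* Models the "first error" path: the header's copy slot is released with
 * RSP_OKAY only if the header did NOT share a slot with frag 0 -- a fact
 * established at function entry and unchanged by whether the error happens
 * while walking the frags or the frag list.
 */
static void release_header_on_first_error(bool sharedslot, int header_idx)
{
        if (!sharedslot)
                printf("release header slot %d with RSP_OKAY\n", header_idx);
        else
                printf("header slot %d was already handled via the shared frag\n",
                       header_idx);
}

int main(void)
{
        int header_idx = 7;     /* stand-in for XENVIF_TX_CB(skb)->pending_idx */

        /* First mapping error hit on the frag list (second pass): the decision
         * still depends on the original 'sharedslot' value, which is why
         * clearing it before the goto would be wrong.
         */
        release_header_on_first_error(false, header_idx);
        release_header_on_first_error(true, header_idx);
        return 0;
}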
>>>
>>> Reviewed-by: Paul Durrant <paul@....org>
>>
>> Since I've not seen this go into 5.13-rc, may I ask what the disposition
>> of this is?
>
> I can't seem to spot this in 5.14-rc either. I have to admit I'm
> increasingly puzzled ...
Another two months (and another release) later and still nothing. Am
I doing something wrong? Am I wrongly assuming that maintainers would
push such changes up the chain?
Jan