Message-ID: <89653286-f05e-1fc1-b6bf-265b7ecaad0d@suse.com>
Date: Mon, 27 Mar 2023 17:38:44 +0200
From: Jan Beulich <jbeulich@...e.com>
To: Juergen Gross <jgross@...e.com>
Cc: Wei Liu <wei.liu@...nel.org>, Paul Durrant <paul@....org>,
"David S. Miller" <davem@...emloft.net>,
Eric Dumazet <edumazet@...gle.com>,
Jakub Kicinski <kuba@...nel.org>,
Paolo Abeni <pabeni@...hat.com>,
xen-devel@...ts.xenproject.org, stable@...r.kernel.org,
linux-kernel@...r.kernel.org, netdev@...r.kernel.org
Subject: Re: [PATCH 1/2] xen/netback: don't do grant copy across page boundary
On 27.03.2023 12:07, Juergen Gross wrote:
> On 27.03.23 11:49, Jan Beulich wrote:
>> On 27.03.2023 10:36, Juergen Gross wrote:
>>> @@ -413,6 +418,13 @@ static void xenvif_get_requests(struct xenvif_queue *queue,
>>>  		cop->dest.u.gmfn = virt_to_gfn(skb->data + skb_headlen(skb)
>>>  					       - data_len);
>>>
>>> +		/* Don't cross local page boundary! */
>>> +		if (cop->dest.offset + amount > XEN_PAGE_SIZE) {
>>> +			amount = XEN_PAGE_SIZE - cop->dest.offset;
>>> +			XENVIF_TX_CB(skb)->split_mask |= 1U << copy_count(skb);
>>
>> Maybe worthwhile to add a BUILD_BUG_ON() somewhere to make sure this
>> shift can't end up with too large a shift count. The number of slots
>> accepted could conceivably be grown past XEN_NETBK_LEGACY_SLOTS_MAX
>> (i.e. XEN_NETIF_NR_SLOTS_MIN) at some point.
>
> This is basically impossible due to the size restriction of struct
> xenvif_tx_cb.
If its size became a problem, it might simply take a level of indirection
to overcome the limitation.
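
For reference, a minimal sketch of the check I have in mind (assuming the
field keeps the name and type the patch gives it; sizeof_field() and
BITS_PER_BYTE are the usual kernel helpers):

	/* Fail the build if the number of accepted slots can exceed the
	 * number of bits in split_mask, i.e. if the shift in the hunk
	 * above could grow too large. */
	BUILD_BUG_ON(XEN_NETBK_LEGACY_SLOTS_MAX >
		     sizeof_field(struct xenvif_tx_cb, split_mask) * BITS_PER_BYTE);
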
>>> @@ -420,7 +432,8 @@ static void xenvif_get_requests(struct xenvif_queue *queue,
>>>  		pending_idx = queue->pending_ring[index];
>>>  		callback_param(queue, pending_idx).ctx = NULL;
>>>  		copy_pending_idx(skb, copy_count(skb)) = pending_idx;
>>> -		copy_count(skb)++;
>>> +		if (!split)
>>> +			copy_count(skb)++;
>>>
>>>  		cop++;
>>>  		data_len -= amount;
>>> @@ -441,7 +454,8 @@ static void xenvif_get_requests(struct xenvif_queue *queue,
>>>  			nr_slots--;
>>>  		} else {
>>>  			/* The copy op partially covered the tx_request.
>>> -			 * The remainder will be mapped.
>>> +			 * The remainder will be mapped or copied in the next
>>> +			 * iteration.
>>>  			 */
>>>  			txp->offset += amount;
>>>  			txp->size -= amount;
>>> @@ -539,6 +553,13 @@ static int xenvif_tx_check_gop(struct xenvif_queue *queue,
>>>  		pending_idx = copy_pending_idx(skb, i);
>>>
>>>  		newerr = (*gopp_copy)->status;
>>> +
>>> +		/* Split copies need to be handled together. */
>>> +		if (XENVIF_TX_CB(skb)->split_mask & (1U << i)) {
>>> +			(*gopp_copy)++;
>>> +			if (!newerr)
>>> +				newerr = (*gopp_copy)->status;
>>> +		}
>>
>> It isn't guaranteed that a slot may be split only once, is it? Assuming a
>
> I think it is guaranteed.
>
> No slot can cover more than XEN_PAGE_SIZE bytes due to the grants being
> restricted to that size. There is no way such a data packet could cross
> 2 page boundaries.
>
> In the end the concern isn't whether the copies for the linear area cross
> multiple page boundaries, but whether the copies for a single request slot
> do. And that can't happen IMO.
You're thinking of only well-formed requests. What about such a request
providing a large size with only tiny fragments? xenvif_get_requests()
will happily process it, creating bogus grant-copy ops. But those ops will
fail only once submitted to Xen, by which point damage may already have
occurred (from bogus updates of internal state; the logic altogether is
too involved for me to be convinced that nothing bad can happen).
Interestingly (as I realize now) the shifts you add are not at risk of
turning into UB in this case, as the shift count won't go beyond 16.
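
To make the split handling concrete, here is a standalone sketch of just
the clamping arithmetic from the first hunk (a hypothetical, self-contained
rendition, not the driver code):

	#include <stdio.h>

	#define XEN_PAGE_SIZE 4096u

	/* Clamp one copy op so its destination doesn't cross a page
	 * boundary; mirrors the "Don't cross local page boundary!" hunk. */
	static unsigned int clamp_copy(unsigned int dest_offset,
				       unsigned int amount, int *split)
	{
		*split = dest_offset + amount > XEN_PAGE_SIZE;
		return *split ? XEN_PAGE_SIZE - dest_offset : amount;
	}

	int main(void)
	{
		int split;
		/* A well-formed slot: 0x200 bytes landing at destination
		 * offset 0xF00 cross the boundary once - the first op is
		 * clamped to 0x100 bytes and the remaining 0x100 bytes
		 * become a second op in the next loop iteration. A slot
		 * claiming more than XEN_PAGE_SIZE bytes, though, would
		 * need more than one split, yet would be rejected by Xen
		 * only at submission time. */
		unsigned int len = clamp_copy(0xF00, 0x200, &split);

		printf("first op: %#x bytes, split=%d\n", len, split);
		return 0;
	}
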
>> near-64k packet with all tiny non-primary slots, that'll cause those tiny
>> slots to all be mapped, but due to
>>
>> 	if (ret >= XEN_NETBK_LEGACY_SLOTS_MAX - 1 && data_len < txreq.size)
>> 		data_len = txreq.size;
>>
>> it will, afaict, cause a lot of copying for the primary slot. Therefore I
>> think you need a loop here, not just an if(). Plus the dimension of
>> tx_copy_ops[] also looks to need further growing to accommodate this. Or
>> maybe not - at least the extreme example given would still be fine; more
>> generally packets being limited to below 64k means 2*16 slots would
>> suffice at one end of the scale, while 2*MAX_PENDING_REQS would at the
>> other end (all tiny, including the primary slot). What I haven't fully
>> convinced myself of is whether there might be cases in the middle which
>> are yet worse.
>
> See above reasoning. I think it is okay, but maybe I'm missing something.
Well, the main thing I'm missing is a "primary request fits in a page"
check, even more so with the new copying logic that the commit referenced
by Fixes: introduced into xenvif_get_requests().
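A sketch of the shape such a check could take (hypothetical; where exactly
it would sit relative to the validation already done elsewhere in netback
is deliberately left open):

	/* Reject a primary request whose claimed data would extend past
	 * its source page, before any copy ops are set up for it. */
	if (unlikely(txreq.offset + txreq.size > XEN_PAGE_SIZE)) {
		netdev_err(queue->vif->dev,
			   "Primary request crosses page boundary, offset: %u, size: %u\n",
			   txreq.offset, txreq.size);
		xenvif_fatal_tx_err(queue->vif);
		break;
	}
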
Jan