Date: Sat, 9 May 2020 10:15:27 +0800
From: Jason Wang <jasowang@...hat.com>
To: Jesper Dangaard Brouer <brouer@...hat.com>, sameehj@...zon.com
Cc: netdev@...r.kernel.org, bpf@...r.kernel.org,
    Toke Høiland-Jørgensen <toke@...hat.com>,
    Daniel Borkmann <borkmann@...earbox.net>,
    Alexei Starovoitov <alexei.starovoitov@...il.com>,
    "David S. Miller" <davem@...emloft.net>,
    John Fastabend <john.fastabend@...il.com>,
    Alexander Duyck <alexander.duyck@...il.com>,
    Jeff Kirsher <jeffrey.t.kirsher@...el.com>,
    David Ahern <dsahern@...il.com>,
    Ilias Apalodimas <ilias.apalodimas@...aro.org>,
    Lorenzo Bianconi <lorenzo@...nel.org>,
    Saeed Mahameed <saeedm@...lanox.com>,
    Tariq Toukan <tariqt@...lanox.com>
Subject: Re: [PATCH net-next v3 21/33] virtio_net: add XDP frame size in two code paths

On 2020/5/8 7:10 PM, Jesper Dangaard Brouer wrote:
> The virtio_net driver runs inside the guest OS. There are two
> XDP receive code paths in virtio_net, namely receive_small() and
> receive_mergeable(). The receive_big() function does not support XDP.
>
> In receive_small() the frame size is available in buflen. The buffers
> backing these frames are allocated in add_recvbuf_small() with the same
> size, except for the headroom, and the tailroom has room reserved for
> skb_shared_info. The headroom is encoded as a value in the ctx pointer.
>
> In receive_mergeable() the frame size is more dynamic. There are two
> basic cases: (1) the buffer size is based on an exponentially weighted
> moving average (see DECLARE_EWMA) of the packet length, or (2) if
> virtnet_get_headroom() returns any headroom, the buffer size is
> PAGE_SIZE. This time the ctx pointer is used to encode two values:
> the buffer length "truesize" and the headroom. In case (1), if the rx
> buffer size is underestimated, the packet will have been split over
> several buffers (the num_buf info in virtio_net_hdr_mrg_rxbuf placed
> at the top of the buffer area). If that happens, the XDP path does an
> xdp_linearize_page operation.
>
> V3: Adjust frame_sz in the receive_mergeable() case, spotted by Jason Wang.
>
> The code is really hard to follow,

Yes, I plan to rework it to make it easier for reviewers.

> so here are some hints for reviewers.
> The receive_mergeable() case gets frames that were allocated in
> add_recvbuf_mergeable(), which uses headroom=virtnet_get_headroom(),
> and the 'buf' pointer is advanced by this headroom. The headroom can
> only be 0 or VIRTIO_XDP_HEADROOM, as virtnet_get_headroom() is really
> simple:
>
>  static unsigned int virtnet_get_headroom(struct virtnet_info *vi)
>  {
>  	return vi->xdp_queue_pairs ? VIRTIO_XDP_HEADROOM : 0;
>  }
>
> As frame_sz is an offset size from xdp.data_hard_start, reviewers
> should notice how this is calculated in receive_mergeable():
>
>  int offset = buf - page_address(page);
>  [...]
>  data = page_address(xdp_page) + offset;
>  xdp.data_hard_start = data - VIRTIO_XDP_HEADROOM + vi->hdr_len;
>
> The calculated offset will always be VIRTIO_XDP_HEADROOM when this
> code is reached. Thus, xdp.data_hard_start will be the page-start
> address plus vi->hdr_len. Given this, xdp.frame_sz needs to be
> reduced by vi->hdr_len.
>
> IMHO a follow-up patch should clean up this code to make it easier
> to maintain and understand, but that is outside the scope of this
> patchset.
>
> Cc: Jason Wang <jasowang@...hat.com>
> Signed-off-by: Jesper Dangaard Brouer <brouer@...hat.com>
> Acked-by: Michael S. Tsirkin <mst@...hat.com>

Acked-by: Jason Wang <jasowang@...hat.com>

Thanks

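As an aside for readers less familiar with the mergeable-buffer path: below is a
minimal standalone sketch of the two-value ctx encoding mentioned above. It mirrors
the mergeable_ctx_to_truesize()/mergeable_ctx_to_headroom() helpers used in the diff,
but the 22-bit split point and the example values are assumptions for illustration
only; consult drivers/net/virtio_net.c for the real definitions.

/* Standalone sketch (not driver code): pack "truesize" and "headroom"
 * into one pointer-sized ctx value, as receive_mergeable() expects.
 * The 22-bit split point is assumed here for illustration.
 */
#include <stdio.h>

#define MRG_CTX_HEADER_SHIFT	22	/* assumed split point */
#define MRG_CTX_TRUESIZE_MASK	((1UL << MRG_CTX_HEADER_SHIFT) - 1)

static void *mergeable_len_to_ctx(unsigned int truesize, unsigned int headroom)
{
	return (void *)(unsigned long)((headroom << MRG_CTX_HEADER_SHIFT) | truesize);
}

static unsigned int mergeable_ctx_to_truesize(void *mrg_ctx)
{
	return (unsigned long)mrg_ctx & MRG_CTX_TRUESIZE_MASK;
}

static unsigned int mergeable_ctx_to_headroom(void *mrg_ctx)
{
	return (unsigned long)mrg_ctx >> MRG_CTX_HEADER_SHIFT;
}

int main(void)
{
	/* 256 is a stand-in for VIRTIO_XDP_HEADROOM, 1536 for an EWMA-based buffer size */
	void *ctx = mergeable_len_to_ctx(1536, 256);

	printf("truesize=%u headroom=%u\n",
	       mergeable_ctx_to_truesize(ctx),
	       mergeable_ctx_to_headroom(ctx));
	return 0;
}
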
> ---
>  drivers/net/virtio_net.c |   15 ++++++++++++---
>  1 file changed, 12 insertions(+), 3 deletions(-)
>
> diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
> index 11f722460513..9e1b5d748586 100644
> --- a/drivers/net/virtio_net.c
> +++ b/drivers/net/virtio_net.c
> @@ -689,6 +689,7 @@ static struct sk_buff *receive_small(struct net_device *dev,
>  		xdp.data_end = xdp.data + len;
>  		xdp.data_meta = xdp.data;
>  		xdp.rxq = &rq->xdp_rxq;
> +		xdp.frame_sz = buflen;
>  		orig_data = xdp.data;
>  		act = bpf_prog_run_xdp(xdp_prog, &xdp);
>  		stats->xdp_packets++;
> @@ -797,10 +798,11 @@ static struct sk_buff *receive_mergeable(struct net_device *dev,
>  	int offset = buf - page_address(page);
>  	struct sk_buff *head_skb, *curr_skb;
>  	struct bpf_prog *xdp_prog;
> -	unsigned int truesize;
> +	unsigned int truesize = mergeable_ctx_to_truesize(ctx);
>  	unsigned int headroom = mergeable_ctx_to_headroom(ctx);
> -	int err;
>  	unsigned int metasize = 0;
> +	unsigned int frame_sz;
> +	int err;
>
>  	head_skb = NULL;
>  	stats->bytes += len - vi->hdr_len;
> @@ -821,6 +823,11 @@ static struct sk_buff *receive_mergeable(struct net_device *dev,
>  		if (unlikely(hdr->hdr.gso_type))
>  			goto err_xdp;
>
> +		/* Buffers with headroom use PAGE_SIZE as alloc size,
> +		 * see add_recvbuf_mergeable() + get_mergeable_buf_len()
> +		 */
> +		frame_sz = headroom ? PAGE_SIZE : truesize;
> +
>  		/* This happens when rx buffer size is underestimated
>  		 * or headroom is not enough because of the buffer
>  		 * was refilled before XDP is set. This should only
> @@ -834,6 +841,8 @@ static struct sk_buff *receive_mergeable(struct net_device *dev,
>  						      page, offset,
>  						      VIRTIO_XDP_HEADROOM,
>  						      &len);
> +			frame_sz = PAGE_SIZE;
> +
>  			if (!xdp_page)
>  				goto err_xdp;
>  			offset = VIRTIO_XDP_HEADROOM;
> @@ -850,6 +859,7 @@ static struct sk_buff *receive_mergeable(struct net_device *dev,
>  		xdp.data_end = xdp.data + (len - vi->hdr_len);
>  		xdp.data_meta = xdp.data;
>  		xdp.rxq = &rq->xdp_rxq;
> +		xdp.frame_sz = frame_sz - vi->hdr_len;
>
>  		act = bpf_prog_run_xdp(xdp_prog, &xdp);
>  		stats->xdp_packets++;
> @@ -924,7 +934,6 @@ static struct sk_buff *receive_mergeable(struct net_device *dev,
>  	}
>  	rcu_read_unlock();
>
> -	truesize = mergeable_ctx_to_truesize(ctx);
>  	if (unlikely(len > truesize)) {
>  		pr_debug("%s: rx error: len %u exceeds truesize %lu\n",
>  			 dev->name, len, (unsigned long)ctx);
>
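To double-check the frame_sz adjustment in the last receive_mergeable() hunk, here
is a small standalone sketch of the offset arithmetic from the commit message. The
constants are illustrative stand-ins (not taken from the driver headers), so the
macro names are deliberately different from the kernel's.

/* Standalone sketch (not driver code) of the receive_mergeable() offset
 * arithmetic described in the commit message.  All constants are
 * illustrative stand-ins.
 */
#include <assert.h>
#include <stdio.h>

#define PAGE_SZ		4096UL
#define XDP_HEADROOM	256UL	/* stand-in for VIRTIO_XDP_HEADROOM */
#define HDR_LEN		12UL	/* stand-in for vi->hdr_len */

int main(void)
{
	unsigned long page_start = 0x10000;			/* pretend page_address(page) */
	unsigned long buf = page_start + XDP_HEADROOM;		/* buf was advanced by the headroom */
	unsigned long offset = buf - page_start;		/* always XDP_HEADROOM here */
	unsigned long data = page_start + offset;
	unsigned long data_hard_start = data - XDP_HEADROOM + HDR_LEN;
	unsigned long frame_sz = PAGE_SZ;			/* headroom != 0 case */
	unsigned long xdp_frame_sz = frame_sz - HDR_LEN;	/* the adjustment in the patch */

	/* data_hard_start ends up at page start + hdr_len ... */
	assert(data_hard_start == page_start + HDR_LEN);

	/* ... so frame_sz counted from data_hard_start must shrink by hdr_len
	 * for the end of the XDP frame to line up with the end of the buffer.
	 */
	assert(data_hard_start + xdp_frame_sz == page_start + frame_sz);

	printf("offset=%lu data_hard_start=page+%lu xdp.frame_sz=%lu\n",
	       offset, data_hard_start - page_start, xdp_frame_sz);
	return 0;
}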