Date:   Tue, 9 Jul 2019 22:03:26 +0200
From:   Daniel Borkmann <daniel@...earbox.net>
To:     Jason Wang <jasowang@...hat.com>,
        Yuya Kusakabe <yuya.kusakabe@...il.com>
Cc:     ast@...nel.org, davem@...emloft.net, hawk@...nel.org,
        jakub.kicinski@...ronome.com, john.fastabend@...il.com,
        kafai@...com, mst@...hat.com, netdev@...r.kernel.org,
        songliubraving@...com, yhs@...com
Subject: Re: [PATCH bpf-next v3] virtio_net: add XDP meta data support

On 07/09/2019 05:04 AM, Jason Wang wrote:
> On 2019/7/9 6:38 AM, Daniel Borkmann wrote:
>> On 07/02/2019 04:11 PM, Yuya Kusakabe wrote:
>>> On 7/2/19 5:33 PM, Jason Wang wrote:
>>>>> On 2019/7/2 4:16 PM, Yuya Kusakabe wrote:
>>>>> This adds XDP meta data support to both receive_small() and
>>>>> receive_mergeable().
>>>>>
>>>>> Fixes: de8f3a83b0a0 ("bpf: add meta pointer for direct access")
>>>>> Signed-off-by: Yuya Kusakabe <yuya.kusakabe@...il.com>
>>>>> ---
>>>>> v3:
>>>>>    - fix preserve the vnet header in receive_small().
>>>>> v2:
>>>>>    - keep copy untouched in page_to_skb().
>>>>>    - preserve the vnet header in receive_small().
>>>>>    - fix indentation.
>>>>> ---
>>>>>    drivers/net/virtio_net.c | 45 +++++++++++++++++++++++++++-------------
>>>>>    1 file changed, 31 insertions(+), 14 deletions(-)
>>>>>
>>>>> diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
>>>>> index 4f3de0ac8b0b..03a1ae6fe267 100644
>>>>> --- a/drivers/net/virtio_net.c
>>>>> +++ b/drivers/net/virtio_net.c
>>>>> @@ -371,7 +371,7 @@ static struct sk_buff *page_to_skb(struct virtnet_info *vi,
>>>>>                                     struct receive_queue *rq,
>>>>>                                     struct page *page, unsigned int offset,
>>>>>                                     unsigned int len, unsigned int truesize,
>>>>> -                                   bool hdr_valid)
>>>>> +                                   bool hdr_valid, unsigned int metasize)
>>>>>  {
>>>>>         struct sk_buff *skb;
>>>>>         struct virtio_net_hdr_mrg_rxbuf *hdr;
>>>>> @@ -393,7 +393,7 @@ static struct sk_buff *page_to_skb(struct virtnet_info *vi,
>>>>>         else
>>>>>                 hdr_padded_len = sizeof(struct padded_vnet_hdr);
>>>>>
>>>>> -       if (hdr_valid)
>>>>> +       if (hdr_valid && !metasize)
>>>>>                 memcpy(hdr, p, hdr_len);
>>>>>
>>>>>         len -= hdr_len;
>>>>> @@ -405,6 +405,11 @@ static struct sk_buff *page_to_skb(struct virtnet_info *vi,
>>>>>                 copy = skb_tailroom(skb);
>>>>>         skb_put_data(skb, p, copy);
>>>>>
>>>>> +       if (metasize) {
>>>>> +               __skb_pull(skb, metasize);
>>>>> +               skb_metadata_set(skb, metasize);
>>>>> +       }
>>>>> +
>>>>>         len -= copy;
>>>>>         offset += copy;
>>>>>
>>>>> @@ -644,6 +649,7 @@ static struct sk_buff *receive_small(struct net_device *dev,
>>>>>         unsigned int delta = 0;
>>>>>         struct page *xdp_page;
>>>>>         int err;
>>>>> +       unsigned int metasize = 0;
>>>>>
>>>>>         len -= vi->hdr_len;
>>>>>         stats->bytes += len;
>>>>> @@ -683,10 +689,13 @@ static struct sk_buff *receive_small(struct net_device *dev,
>>>>>
>>>>>                 xdp.data_hard_start = buf + VIRTNET_RX_PAD + vi->hdr_len;
>>>>>                 xdp.data = xdp.data_hard_start + xdp_headroom;
>>>>> -               xdp_set_data_meta_invalid(&xdp);
>>>>>                 xdp.data_end = xdp.data + len;
>>>>> +               xdp.data_meta = xdp.data;
>>>>>                 xdp.rxq = &rq->xdp_rxq;
>>>>>                 orig_data = xdp.data;
>>>>> +               /* Copy the vnet header to the front of data_hard_start to avoid
>>>>> +                * overwriting by XDP meta data */
>>>>> +               memcpy(xdp.data_hard_start - vi->hdr_len, xdp.data - vi->hdr_len, vi->hdr_len);
>> I'm not fully sure if I'm following this one correctly, probably just missing
>> something. Isn't the vnet header based on how we set up xdp.data_hard_start
>> earlier already in front of it? Wouldn't we copy invalid data from xdp.data -
>> vi->hdr_len into the vnet header at that point (given there can be up to 256
>> bytes of headroom between the two)? If it's relative to xdp.data and headroom
>> is >0, then BPF prog could otherwise mangle this; something doesn't add up to
>> me here. Could you clarify? Thx
> 
> The vnet header sits just in front of xdp.data, not xdp.data_hard_start. So it could be overwritten by metadata; that's why we need a copy here.
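
[Aside, not part of the thread: a minimal sketch of the layout Jason describes,
using the names from the patch above. In receive_small() the device places the
vnet header in the last vi->hdr_len bytes of the XDP headroom, i.e. immediately
in front of xdp.data, while the BPF program may write anywhere in
[data_hard_start, data_end). Growing the metadata area with
bpf_xdp_adjust_meta() therefore lands on top of the header, which is why the
patch first saves it below data_hard_start, out of the program's reach.]

	/*
	 * buf+VIRTNET_RX_PAD    data_hard_start               data       data_end
	 *       |<- vi->hdr_len ->|<------ xdp_headroom ------->|<- pkt ->|
	 *       |   copy dest     |       ...      [ vnet hdr ] |
	 *                                            ^ the hdr_len bytes just
	 *                                              below xdp.data
	 */
	memcpy(xdp.data_hard_start - vi->hdr_len,  /* not reachable by the BPF prog */
	       xdp.data - vi->hdr_len,             /* where the device wrote it     */
	       vi->hdr_len);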

For the current code, you can already adjust xdp.data with a positive/negative offset
via the bpf_xdp_adjust_head() helper. If the vnet header sits just in front of
xdp.data, couldn't it be overwritten today as well then? Anyway, just wondering
how this is handled differently?

Thanks,
Daniel
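
[For context on the helpers being discussed, a minimal sketch of the XDP program
side (an assumed example, not taken from the thread or the patch):
bpf_xdp_adjust_meta() grows the metadata area between data_meta and data
downward into the headroom, and with the driver change above the stored value is
carried into the skb's metadata via skb_metadata_set(); bpf_xdp_adjust_head()
likewise lets a program move xdp.data itself within that headroom.]

/* Illustrative XDP program: reserve 4 bytes of metadata in front of the
 * packet and stash a per-packet value there.  Names and the stored value
 * are arbitrary.
 */
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

SEC("xdp")
int xdp_store_meta(struct xdp_md *ctx)
{
	__u32 *meta;
	void *data;

	/* Grow the metadata area by 4 bytes, towards data_hard_start. */
	if (bpf_xdp_adjust_meta(ctx, -(int)sizeof(*meta)))
		return XDP_PASS;

	/* Re-read the pointers after the adjust and bounds-check them
	 * so the verifier accepts the store below.
	 */
	meta = (void *)(long)ctx->data_meta;
	data = (void *)(long)ctx->data;
	if ((void *)(meta + 1) > data)
		return XDP_PASS;

	*meta = 0xcafe;		/* arbitrary per-packet metadata */
	return XDP_PASS;
}

char _license[] SEC("license") = "GPL";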
