Message-ID: <58331942.1010009@gmail.com>
Date:   Mon, 21 Nov 2016 07:56:50 -0800
From:   John Fastabend <john.fastabend@...il.com>
To:     Daniel Borkmann <daniel@...earbox.net>, eric.dumazet@...il.com,
        mst@...hat.com, kubakici@...pl, shm@...ulusnetworks.com,
        davem@...emloft.net, alexei.starovoitov@...il.com
Cc:     netdev@...r.kernel.org, bblanco@...mgrid.com,
        john.r.fastabend@...el.com, brouer@...hat.com, tgraf@...g.ch
Subject: Re: [net-next PATCH v2 4/5] virtio_net: add dedicated XDP transmit
 queues

On 16-11-21 03:45 AM, Daniel Borkmann wrote:
> On 11/20/2016 03:51 AM, John Fastabend wrote:
>> XDP requires using isolated transmit queues to avoid interference
>> with normal networking stack (BQL, NETDEV_TX_BUSY, etc). This patch
>> adds a XDP queue per cpu when a XDP program is loaded and does not
>> expose the queues to the OS via the normal API call to
>> netif_set_real_num_tx_queues(). This way the stack will never push
>> an skb to these queues.
>>
>> However virtio/vhost/qemu implementation only allows for creating
>> TX/RX queue pairs at this time so creating only TX queues was not
>> possible. And because the associated RX queues are being created I
>> went ahead and exposed these to the stack and let the backend use
>> them. This creates more RX queues visible to the network stack than
>> TX queues which is worth mentioning but does not cause any issues as
>> far as I can tell.
>>
>> Signed-off-by: John Fastabend <john.r.fastabend@...el.com>
>> ---

[...]

>>       }
>>
>> +    curr_qp = vi->curr_queue_pairs - vi->xdp_queue_pairs;
>> +    if (prog)
>> +        xdp_qp = nr_cpu_ids;
>> +
>> +    /* XDP requires extra queues for XDP_TX */
>> +    if (curr_qp + xdp_qp > vi->max_queue_pairs) {
>> +        netdev_warn(dev, "request %i queues but max is %i\n",
>> +                curr_qp + xdp_qp, vi->max_queue_pairs);
>> +        return -ENOMEM;
>> +    }
>> +
>> +    err = virtnet_set_queues(vi, curr_qp + xdp_qp);
>> +    if (err) {
>> +        dev_warn(&dev->dev, "XDP Device queue allocation failure.\n");
>> +        return err;
>> +    }
>> +
>>       if (prog) {
>> -        prog = bpf_prog_add(prog, vi->max_queue_pairs - 1);
>> -        if (IS_ERR(prog))
>> +        prog = bpf_prog_add(prog, vi->max_queue_pairs);
> 
> I think this change is not correct, it would be off by one now.
> The previous 'vi->max_queue_pairs - 1' was actually correct here.
> dev_change_xdp_fd() already gives you a reference (see the doc on
> enum xdp_netdev_command in netdevice.h).


Right, this was an error, thanks for catching it. I'll send a v3. And
maybe I'll draft a test for XDP ref counting so we catch this class of
bug in the future.

.John

