Message-ID: <CAJ8uoz2bj0YWH5K6OW8m+BC06QZTSYW=xbApuEDK5pRCx+RLAA@mail.gmail.com>
Date:   Mon, 28 Sep 2020 16:25:01 +0200
From:   Magnus Karlsson <magnus.karlsson@...il.com>
To:     David Ahern <dsahern@...il.com>
Cc:     Toke Høiland-Jørgensen <toke@...hat.com>,
        David Ahern <dahern@...italocean.com>,
        "Michael S. Tsirkin" <mst@...hat.com>,
        Jason Wang <jasowang@...hat.com>,
        David Ahern <dsahern@...nel.org>,
        Network Development <netdev@...r.kernel.org>,
        "David S. Miller" <davem@...emloft.net>,
        Jakub Kicinski <kuba@...nel.org>,
        Jesper Dangaard Brouer <brouer@...hat.com>,
        Magnus Karlsson <magnus.karlsson@...el.com>
Subject: Re: [PATCH RFC net-next] virtio_net: Relax queue requirement for
 using XDP

On Mon, Sep 28, 2020 at 5:13 AM David Ahern <dsahern@...il.com> wrote:
>
> On 2/27/20 2:41 AM, Magnus Karlsson wrote:
> > It will unfortunately be after Netdevconf due to other commitments. The
> > plan is to send out the RFC to the co-authors of the Plumbers
> > presentation first, just to check the sanity of it. And after that
> > send it to the mailing list. Note that I have taken two shortcuts in
> > the RFC to be able to make quicker progress. The first one is the
> > driver implementation of the dynamic queue allocation and
> > de-allocation. It just does this within a statically pre-allocated set
> > of queues. The second is that the user space interface is just a
> > setsockopt instead of an rtnetlink interface. Again, just to save some
> > time in this initial phase. The information communicated in the
> > interface is the same though. In the current code, the queue manager
> > can handle the queues of the networking stack, the XDP_TX queues and
> > queues allocated by user space and used for AF_XDP. Other uses from
> > user space are not covered due to my setsockopt shortcut. Hopefully
> > though, this should be enough for an initial assessment.
>
> Any updates on the RFC? I do not recall seeing a patch set on the
> mailing list, but maybe I missed it.

No, you have unfortunately not missed anything. The RFC has been lying
on the shelf collecting dust for most of this time. The reason is that
the driver changes needed to support dynamic queue allocation became
too complex: they would require major surgery to at least all the
Intel drivers, and probably to a large number of other ones as well. I
do not think any vendor would support such a high-effort solution, and
I could not (at that time, at least) find a way around it. So, gaining
visibility into which queues have been allocated (by all the entities
in the kernel that use queues) seems rather straightforward, but the
dynamic allocation part seems to be anything but.
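
To make the setsockopt shortcut from the quoted mail concrete, here is
roughly what it could look like from user space. Since the RFC was
never posted, XDP_ALLOC_QUEUE and struct xdp_queue_req below are
made-up names used purely for illustration; only SOL_XDP is a real
AF_XDP socket option level.

#include <linux/if_xdp.h>
#include <sys/socket.h>

#ifndef SOL_XDP
#define SOL_XDP 283                     /* real value from the uapi */
#endif

/* Hypothetical interface: ask the kernel's queue manager to reserve a
 * specific rx/tx queue pair on a device for this AF_XDP socket.
 * Nothing like this exists upstream; it only shows the shape of the
 * setsockopt shortcut. */
#define XDP_ALLOC_QUEUE 64              /* made-up option number */

struct xdp_queue_req {                  /* made-up argument struct */
        __u32 ifindex;                  /* device to take the queue from */
        __u32 queue_id;                 /* queue pair to reserve */
};

static int reserve_queue(int xsk_fd, __u32 ifindex, __u32 queue_id)
{
        struct xdp_queue_req req = {
                .ifindex  = ifindex,
                .queue_id = queue_id,
        };

        return setsockopt(xsk_fd, SOL_XDP, XDP_ALLOC_QUEUE,
                          &req, sizeof(req));
}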

I also wonder how useful this queue manager proposal would be in light
of Mellanox's subfunction proposal. If people just start to create
many small netdevs (albeit at a high cost, which people may argue
against) consisting of just an rx/tx queue pair, then the queue
manager's dynamic allocation would not be as useful. In the AF_XDP
case, we could just bind to one of these netdevs and always specify
queue 0. On the other hand, one can argue that queue management is
needed even with the subfunction approach, though then it would sit at
a much lower level than what I proposed. What is your take on this?
Still worth pursuing in some form or another? If yes, then we really
need to come up with an easy way of supporting this in current
drivers. It is not going to fly otherwise, IMHO.
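
For illustration, the bind in the single-queue subfunction case needs
nothing beyond the existing AF_XDP socket API. The sketch below
assumes a subfunction netdev named "sf0" (a made-up name) and elides
the UMEM and fill/completion ring setup that bind() requires:

#include <linux/if_xdp.h>
#include <net/if.h>
#include <string.h>
#include <sys/socket.h>

#ifndef AF_XDP
#define AF_XDP 44                       /* real value from the uapi */
#endif

/* Minimal sketch: bind an AF_XDP socket to queue 0 of a single-queue
 * netdev. "sf0" is a hypothetical subfunction device name. A real
 * application must register a UMEM and the fill/completion rings
 * before bind() will succeed; that setup is elided here. */
static int bind_xsk_queue0(void)
{
        struct sockaddr_xdp sxdp;
        int fd = socket(AF_XDP, SOCK_RAW, 0);

        if (fd < 0)
                return -1;

        memset(&sxdp, 0, sizeof(sxdp));
        sxdp.sxdp_family   = AF_XDP;
        sxdp.sxdp_ifindex  = if_nametoindex("sf0");
        sxdp.sxdp_queue_id = 0; /* single-queue netdev: always queue 0 */

        if (bind(fd, (struct sockaddr *)&sxdp, sizeof(sxdp)))
                return -1;

        return fd;
}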
