Message-ID: <169c530c-0c00-4677-979f-4a998e336e0b@gmail.com>
Date: Fri, 1 Nov 2024 21:12:31 +0000
From: Pavel Begunkov <asml.silence@...il.com>
To: Jens Axboe <axboe@...nel.dk>, Mina Almasry <almasrymina@...gle.com>,
 David Wei <dw@...idwei.uk>
Cc: io-uring@...r.kernel.org, netdev@...r.kernel.org,
 Jakub Kicinski <kuba@...nel.org>, Paolo Abeni <pabeni@...hat.com>,
 "David S. Miller" <davem@...emloft.net>, Eric Dumazet <edumazet@...gle.com>,
 Jesper Dangaard Brouer <hawk@...nel.org>, David Ahern <dsahern@...nel.org>,
 Stanislav Fomichev <stfomichev@...il.com>, Joe Damato <jdamato@...tly.com>,
 Pedro Tammela <pctammela@...atatu.com>
Subject: Re: [PATCH v7 13/15] io_uring/zcrx: set pp memory provider for an rx
 queue

On 11/1/24 20:35, Jens Axboe wrote:
> On 11/1/24 2:16 PM, Mina Almasry wrote:
>> On Tue, Oct 29, 2024 at 4:06 PM David Wei <dw@...idwei.uk> wrote:
>>>
>>> From: David Wei <davidhwei@...a.com>
>>>
>>> Set the page pool memory provider for the rx queue configured for zero
>>> copy to io_uring. Then the rx queue is reset using
>>> netdev_rx_queue_restart() and netdev core + page pool will take care of
>>> filling the rx queue from the io_uring zero copy memory provider.
>>>
>>> For now, there is only one ifq so its destruction happens implicitly
>>> during io_uring cleanup.
>>>
>>> Signed-off-by: David Wei <dw@...idwei.uk>
>>> ---
>>>   io_uring/zcrx.c | 86 +++++++++++++++++++++++++++++++++++++++++++++++--
>>>   io_uring/zcrx.h |  2 ++
>>>   2 files changed, 86 insertions(+), 2 deletions(-)
>>>
>>> diff --git a/io_uring/zcrx.c b/io_uring/zcrx.c
>>> index 477b0d1b7b91..3f4625730dbd 100644
>>> --- a/io_uring/zcrx.c
>>> +++ b/io_uring/zcrx.c
>>> @@ -8,6 +8,7 @@
>>>   #include <net/page_pool/helpers.h>
>>>   #include <net/page_pool/memory_provider.h>
>>>   #include <trace/events/page_pool.h>
>>> +#include <net/netdev_rx_queue.h>
>>>   #include <net/tcp.h>
>>>   #include <net/rps.h>
>>>
>>> @@ -36,6 +37,65 @@ static inline struct io_zcrx_area *io_zcrx_iov_to_area(const struct net_iov *nio
>>>          return container_of(owner, struct io_zcrx_area, nia);
>>>   }
>>>
>>> +static int io_open_zc_rxq(struct io_zcrx_ifq *ifq, unsigned ifq_idx)
>>> +{
>>> +       struct netdev_rx_queue *rxq;
>>> +       struct net_device *dev = ifq->dev;
>>> +       int ret;
>>> +
>>> +       ASSERT_RTNL();
>>> +
>>> +       if (ifq_idx >= dev->num_rx_queues)
>>> +               return -EINVAL;
>>> +       ifq_idx = array_index_nospec(ifq_idx, dev->num_rx_queues);
>>> +
>>> +       rxq = __netif_get_rx_queue(ifq->dev, ifq_idx);
>>> +       if (rxq->mp_params.mp_priv)
>>> +               return -EEXIST;
>>> +
>>> +       ifq->if_rxq = ifq_idx;
>>> +       rxq->mp_params.mp_ops = &io_uring_pp_zc_ops;
>>> +       rxq->mp_params.mp_priv = ifq;
>>> +       ret = netdev_rx_queue_restart(ifq->dev, ifq->if_rxq);
>>> +       if (ret)
>>> +               goto fail;
>>> +       return 0;
>>> +fail:
>>> +       rxq->mp_params.mp_ops = NULL;
>>> +       rxq->mp_params.mp_priv = NULL;
>>> +       ifq->if_rxq = -1;
>>> +       return ret;
>>> +}
>>> +
>>
>> I don't see a CAP_NET_ADMIN check. Likely I missed it. Is that done
>> somewhere? Binding user memory to an rx queue needs to be a privileged
>> operation.
> 
> There's only one caller of this, and it literally has a CAP_NET_ADMIN at
> the very top. Patch 9 adds the registration.

Right, it's in Patch 9/15, and the check is done very early, before any
objects are created.
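
For context, the shape of that early check is roughly the following
(a minimal sketch only; the function and struct names here are
illustrative and not taken from the quoted patches, the point is just
that the capability test runs before any allocation):

	/* Sketch of the registration entry point; names are illustrative. */
	static int io_register_zcrx_ifq(struct io_ring_ctx *ctx,
					struct io_uring_zcrx_ifq_reg __user *arg)
	{
		/* Reject unprivileged callers before allocating anything. */
		if (!capable(CAP_NET_ADMIN))
			return -EPERM;

		/*
		 * Only after this point: copy the registration request from
		 * userspace, allocate the ifq, and eventually call
		 * io_open_zc_rxq() under rtnl_lock().
		 */
		return 0;
	}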

-- 
Pavel Begunkov
