Message-ID: <a814c41a-40f9-4632-a5bb-ad3da5911fb6@redhat.com>
Date: Tue, 25 Feb 2025 14:04:40 +0100
From: Paolo Abeni <pabeni@...hat.com>
To: Mina Almasry <almasrymina@...gle.com>, netdev@...r.kernel.org,
linux-kernel@...r.kernel.org, linux-doc@...r.kernel.org,
virtualization@...ts.linux.dev, kvm@...r.kernel.org,
linux-kselftest@...r.kernel.org
Cc: Donald Hunter <donald.hunter@...il.com>, Jakub Kicinski
<kuba@...nel.org>, "David S. Miller" <davem@...emloft.net>,
Eric Dumazet <edumazet@...gle.com>, Simon Horman <horms@...nel.org>,
Jonathan Corbet <corbet@....net>, Andrew Lunn <andrew+netdev@...n.ch>,
Jeroen de Borst <jeroendb@...gle.com>,
Harshitha Ramamurthy <hramamurthy@...gle.com>,
Kuniyuki Iwashima <kuniyu@...zon.com>, Willem de Bruijn
<willemb@...gle.com>, David Ahern <dsahern@...nel.org>,
Neal Cardwell <ncardwell@...gle.com>, Stefan Hajnoczi <stefanha@...hat.com>,
Stefano Garzarella <sgarzare@...hat.com>, "Michael S. Tsirkin"
<mst@...hat.com>, Jason Wang <jasowang@...hat.com>,
Xuan Zhuo <xuanzhuo@...ux.alibaba.com>, Eugenio Pérez
<eperezma@...hat.com>, Shuah Khan <shuah@...nel.org>, sdf@...ichev.me,
asml.silence@...il.com, dw@...idwei.uk, Jamal Hadi Salim <jhs@...atatu.com>,
Victor Nogueira <victor@...atatu.com>, Pedro Tammela
<pctammela@...atatu.com>, Samiullah Khawaja <skhawaja@...gle.com>,
Kaiyuan Zhang <kaiyuanz@...gle.com>
Subject: Re: [PATCH net-next v5 3/9] net: devmem: Implement TX path
On 2/22/25 8:15 PM, Mina Almasry wrote:
[...]
> @@ -119,6 +122,13 @@ void net_devmem_unbind_dmabuf(struct net_devmem_dmabuf_binding *binding)
> unsigned long xa_idx;
> unsigned int rxq_idx;
>
> + xa_erase(&net_devmem_dmabuf_bindings, binding->id);
> +
> + /* Ensure no tx net_devmem_lookup_dmabuf() are in flight after the
> + * erase.
> + */
> + synchronize_net();
Is the above statement always true? Can references to the dmabuf still
be stuck in some qdisc, or even in some local socket due to a redirect?
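To illustrate the concern: synchronize_net() only waits for in-flight
RCU read-side critical sections, i.e. a TX path doing something along
these lines (hypothetical sketch, just to make the point):

	rcu_read_lock();
	binding = net_devmem_lookup_dmabuf(sockc.dmabuf_id);
	/* build skb frags pointing into binding->tx_vec */
	rcu_read_unlock();

Once such an skb has been queued, it keeps referencing the binding well
after the grace period has elapsed, e.g. while sitting in a qdisc, or
in a local receive queue after a redirect.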
> @@ -252,13 +261,23 @@ net_devmem_bind_dmabuf(struct net_device *dev, unsigned int dmabuf_fd,
> * binding can be much more flexible than that. We may be able to
> * allocate MTU sized chunks here. Leave that for future work...
> */
> - binding->chunk_pool =
> - gen_pool_create(PAGE_SHIFT, dev_to_node(&dev->dev));
> + binding->chunk_pool = gen_pool_create(PAGE_SHIFT,
> + dev_to_node(&dev->dev));
> if (!binding->chunk_pool) {
> err = -ENOMEM;
> goto err_unmap;
> }
>
> + if (direction == DMA_TO_DEVICE) {
> + binding->tx_vec = kvmalloc_array(dmabuf->size / PAGE_SIZE,
> + sizeof(struct net_iov *),
> + GFP_KERNEL);
> + if (!binding->tx_vec) {
> + err = -ENOMEM;
> + goto err_free_chunks;
Possibly my comment on v3 has been lost:
"""
It looks like the later error paths (in the for_each_sgtable_dma_sg()
loop) could happen even for 'direction == DMA_TO_DEVICE', so I guess an
additional error label is needed to clean tx_vec on such paths.
"""
[...]
> @@ -1071,6 +1072,16 @@ int tcp_sendmsg_locked(struct sock *sk, struct msghdr *msg, size_t size)
>
> flags = msg->msg_flags;
>
> + sockc = (struct sockcm_cookie){ .tsflags = READ_ONCE(sk->sk_tsflags),
> + .dmabuf_id = 0 };
> + if (msg->msg_controllen) {
> + err = sock_cmsg_send(sk, msg, &sockc);
> + if (unlikely(err)) {
> + err = -EINVAL;
> + goto out_err;
> + }
> + }
I'm unsure how much of a problem this would be, but it looks like a
non-blocking sendmsg(MSG_FASTOPEN) with a bad msg_control argument will
start to fail on top of this patch, while it would be successful
(EINPROGRESS) before.
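i.e. something like this (hypothetical userspace snippet, setup and
error checking omitted) on a freshly created non-blocking TCP socket:

	struct sockaddr_in daddr = { .sin_family = AF_INET /* ... */ };
	char cbuf[CMSG_SPACE(sizeof(int))];
	struct msghdr msg = {
		.msg_name	= &daddr,
		.msg_namelen	= sizeof(daddr),
		.msg_control	= cbuf,
		.msg_controllen	= sizeof(cbuf),
	};
	struct cmsghdr *cmsg = CMSG_FIRSTHDR(&msg);

	cmsg->cmsg_level = SOL_SOCKET;
	cmsg->cmsg_type	 = 0xdead;	/* bogus type, rejected by sock_cmsg_send() */
	cmsg->cmsg_len	 = CMSG_LEN(sizeof(int));

	/* fd is a fresh non-blocking TCP socket */
	sendmsg(fd, &msg, MSG_FASTOPEN);

used to reach tcp_sendmsg_fastopen() before any cmsg parsing and fail
with EINPROGRESS, while after this patch sock_cmsg_send() runs first
and the call fails with EINVAL instead.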
/P