Message-ID: <76b4ee49-e3b0-34be-6b81-f58bc2bc3dcc@sberdevices.ru>
Date: Mon, 7 Nov 2022 21:31:20 +0000
From: Arseniy Krasnov <AVKrasnov@...rdevices.ru>
To: Christophe JAILLET <christophe.jaillet@...adoo.fr>,
Stefano Garzarella <sgarzare@...hat.com>,
Stefan Hajnoczi <stefanha@...hat.com>,
"Michael S. Tsirkin" <mst@...hat.com>,
Jason Wang <jasowang@...hat.com>,
"David S. Miller" <davem@...emloft.net>,
"edumazet@...gle.com" <edumazet@...gle.com>,
Jakub Kicinski <kuba@...nel.org>,
Paolo Abeni <pabeni@...hat.com>,
"Krasnov Arseniy" <oxffffaa@...il.com>
CC: "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"kvm@...r.kernel.org" <kvm@...r.kernel.org>,
"virtualization@...ts.linux-foundation.org"
<virtualization@...ts.linux-foundation.org>,
"netdev@...r.kernel.org" <netdev@...r.kernel.org>,
kernel <kernel@...rdevices.ru>
Subject: Re: [RFC PATCH v3 01/11] virtio/vsock: rework packet allocation logic
On 08.11.2022 00:24, Christophe JAILLET wrote:
> On 07/11/2022 at 06:23, Arseniy Krasnov wrote:
>> On 06.11.2022 22:50, Christophe JAILLET wrote:
>>> On 06/11/2022 at 20:36, Arseniy Krasnov wrote:
>>>> To support zerocopy receive, the packet's buffer allocation is changed:
>>>> for buffers which could be mapped to the user's vma we can't use
>>>> 'kmalloc()' (as the kernel does not allow mapping slab pages to a user's
>>>> vma), so the raw buddy allocator is now called instead. For tx packets
>>>> (such packets won't be mapped to user space), the previous 'kmalloc()'
>>>> way is kept, but with a special flag in the packet's structure which
>>>> allows distinguishing between 'kmalloc()' and raw page buffers.
>>>>
>>>> Signed-off-by: Arseniy Krasnov <AVKrasnov@...rdevices.ru>
>>>> ---
>>>> drivers/vhost/vsock.c | 1 +
>>>> include/linux/virtio_vsock.h | 1 +
>>>> net/vmw_vsock/virtio_transport.c | 8 ++++++--
>>>> net/vmw_vsock/virtio_transport_common.c | 10 +++++++++-
>>>> 4 files changed, 17 insertions(+), 3 deletions(-)
>>>>
>>>> diff --git a/drivers/vhost/vsock.c b/drivers/vhost/vsock.c
>>>> index 5703775af129..65475d128a1d 100644
>>>> --- a/drivers/vhost/vsock.c
>>>> +++ b/drivers/vhost/vsock.c
>>>> @@ -399,6 +399,7 @@ vhost_vsock_alloc_pkt(struct vhost_virtqueue *vq,
>>>> return NULL;
>>>> }
>>>> + pkt->slab_buf = true;
>>>> pkt->buf_len = pkt->len;
>>>> nbytes = copy_from_iter(pkt->buf, pkt->len, &iov_iter);
>>>> diff --git a/include/linux/virtio_vsock.h b/include/linux/virtio_vsock.h
>>>> index 35d7eedb5e8e..d02cb7aa922f 100644
>>>> --- a/include/linux/virtio_vsock.h
>>>> +++ b/include/linux/virtio_vsock.h
>>>> @@ -50,6 +50,7 @@ struct virtio_vsock_pkt {
>>>> u32 off;
>>>> bool reply;
>>>> bool tap_delivered;
>>>> + bool slab_buf;
>>>> };
>>>> struct virtio_vsock_pkt_info {
>>>> diff --git a/net/vmw_vsock/virtio_transport.c b/net/vmw_vsock/virtio_transport.c
>>>> index ad64f403536a..19909c1e9ba3 100644
>>>> --- a/net/vmw_vsock/virtio_transport.c
>>>> +++ b/net/vmw_vsock/virtio_transport.c
>>>> @@ -255,16 +255,20 @@ static void virtio_vsock_rx_fill(struct virtio_vsock *vsock)
>>>> vq = vsock->vqs[VSOCK_VQ_RX];
>>>> do {
>>>> + struct page *buf_page;
>>>> +
>>>> pkt = kzalloc(sizeof(*pkt), GFP_KERNEL);
>>>> if (!pkt)
>>>> break;
>>>> - pkt->buf = kmalloc(buf_len, GFP_KERNEL);
>>>> - if (!pkt->buf) {
>>>> + buf_page = alloc_page(GFP_KERNEL);
>>>> +
>>>> + if (!buf_page) {
>>>> virtio_transport_free_pkt(pkt);
>>>> break;
>>>> }
>>>> + pkt->buf = page_to_virt(buf_page);
>>>> pkt->buf_len = buf_len;
>>>> pkt->len = buf_len;
>>>> diff --git a/net/vmw_vsock/virtio_transport_common.c b/net/vmw_vsock/virtio_transport_common.c
>>>> index a9980e9b9304..37e8dbfe2f5d 100644
>>>> --- a/net/vmw_vsock/virtio_transport_common.c
>>>> +++ b/net/vmw_vsock/virtio_transport_common.c
>>>> @@ -69,6 +69,7 @@ virtio_transport_alloc_pkt(struct virtio_vsock_pkt_info *info,
>>>> if (!pkt->buf)
>>>> goto out_pkt;
>>>> + pkt->slab_buf = true;
>>>> pkt->buf_len = len;
>>>> err = memcpy_from_msg(pkt->buf, info->msg, len);
>>>> @@ -1339,7 +1340,14 @@ EXPORT_SYMBOL_GPL(virtio_transport_recv_pkt);
>>>> void virtio_transport_free_pkt(struct virtio_vsock_pkt *pkt)
>>>> {
>>>> - kvfree(pkt->buf);
>>>> + if (pkt->buf_len) {
>>>> + if (pkt->slab_buf)
>>>> + kvfree(pkt->buf);
>>>
>>> Hi,
>>>
>>> kfree()? (according to virtio_transport_alloc_pkt() in -next), or does something else need to be changed?
>>>
>> Hello Christophe,
>>
>> I think 'kvfree()' is still needed here, because the packet's buffer could be allocated by 'kvmalloc()'
>> in drivers/vhost/vsock.c. Correct me if I'm wrong.
>
> Agreed.
>
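
Just to illustrate the point (this is only a sketch with made-up helper
names, not the in-tree code): the vhost path may get its buffer from
kvmalloc(), which can return either slab memory or a vmalloc() area
depending on the size, so the common free path has to use kvfree():

	/* Illustration only: kvmalloc() may hand back slab memory or a
	 * vmalloc() area, so the matching free is kvfree(); a plain
	 * kfree() would be wrong for the vmalloc() case.
	 */
	static void *vsock_alloc_tx_buf(size_t len)
	{
		return kvmalloc(len, GFP_KERNEL);	/* slab or vmalloc backing */
	}

	static void vsock_free_tx_buf(void *buf)
	{
		kvfree(buf);				/* safe for both backings */
	}
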
>>
>>>> + else
>>>> + free_pages((unsigned long)pkt->buf,
>>>> + get_order(pkt->buf_len));
>>>
>>> In virtio_vsock_rx_fill(), only alloc_page() is used.
>>>
>>> Should this be alloc_pages(.., get_order(buf_len)) or free_page() (without an 's') here?
>> This function also frees packets which were allocated in the vhost path. In vhost, for zerocopy
>> packets alloc_pageS() is used.
>
> Ok. Seen in patch 5/11.
>
> But wouldn't it be cleaner and more future-proof to also have alloc_pageS() in virtio_vsock_rx_fill(), even if get_order(buf->len) is known to be 0?
>
> What if, for some reason, PAGE_SIZE were < 4 KB on a given arch, or VIRTIO_VSOCK_DEFAULT_RX_BUF_SIZE increased?
Yes, I see. You're right: it would be correct to use alloc_pages(get_order(buf_len)), because in the current version the real length of
the packet buffer (i.e. a single page) is completely unrelated to VIRTIO_VSOCK_DEFAULT_RX_BUF_SIZE. I'll fix it in the next version.
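
Roughly, what I have in mind for the next version (just a sketch of the
idea, not the final code) is to allocate and free by order in both
places:

	/* in virtio_vsock_rx_fill(): allocate enough contiguous pages
	 * to cover buf_len instead of assuming one page is always enough
	 */
	buf_page = alloc_pages(GFP_KERNEL, get_order(buf_len));
	if (!buf_page) {
		virtio_transport_free_pkt(pkt);
		break;
	}

	pkt->buf = page_to_virt(buf_page);
	pkt->buf_len = buf_len;
	pkt->len = buf_len;

	/* ...and virtio_transport_free_pkt() releases it with the same order: */
	free_pages((unsigned long)pkt->buf, get_order(pkt->buf_len));
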
Thank You
>
> CJ
>
>>
>> Thank You, Arseniy
>>>
>>> Just my 2c,
>>>
>>> CJ
>>>
>>>> + }
>>>> +
>>>> kfree(pkt);
>>>> }
>>>> EXPORT_SYMBOL_GPL(virtio_transport_free_pkt);
>>>
>>
>