Message-ID: <222b40f1-a36c-0375-e965-cd949e8b9eeb@linux.alibaba.com>
Date: Mon, 27 Jul 2020 21:10:09 +0800
From: Shile Zhang <shile.zhang@...ux.alibaba.com>
To: "Michael S. Tsirkin" <mst@...hat.com>
Cc: Jason Wang <jasowang@...hat.com>,
virtualization@...ts.linux-foundation.org,
linux-kernel@...r.kernel.org, kernel test robot <lkp@...el.com>,
Jiang Liu <liuj97@...il.com>, linux-pci@...r.kernel.org,
bhelgaas@...gle.com
Subject: Re: [PATCH v2] virtio_ring: use alloc_pages_node for NUMA-aware allocation
On 2020/7/21 19:28, Shile Zhang wrote:
>
>
> On 2020/7/21 16:18, Michael S. Tsirkin wrote:
>> On Tue, Jul 21, 2020 at 03:00:13PM +0800, Shile Zhang wrote:
>>> Use alloc_pages_node() to allocate memory for the vring queue with
>>> proper NUMA affinity.
>>>
>>> Reported-by: kernel test robot <lkp@...el.com>
>>> Suggested-by: Jiang Liu <liuj97@...il.com>
>>> Signed-off-by: Shile Zhang <shile.zhang@...ux.alibaba.com>
>>
>> Do you observe any performance gains from this patch?
>
> Thanks for your comments!
> Yes, with this change the bandwidth more than doubled (from 30 Gbps to
> 80 Gbps) in my test env (8 NUMA nodes), measured with netperf.
>
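(For context on the measurement: these were standard netperf stream runs
between the two ends; a typical invocation is "netperf -H <server-ip> -t
TCP_STREAM". That command line is only illustrative, not a record of the
exact options used for the numbers above.)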
>>
>> I also wonder why the probe code isn't run on the correct NUMA node?
>> That would fix a wide class of issues like this without need to tweak
>> drivers.
>
> Good point, I'll check this, thanks!
Sorry, I have no idea how the probe code could grab the appropriate
NUMA node.
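
If the idea is to dispatch probe onto a CPU local to the device's node,
so that node-default allocations made during probe already land on the
right node, the rough shape might look like the untested sketch below
(probe_on_device_node() and do_probe() are invented names for
illustration, not existing kernel functions):

#include <linux/cpumask.h>
#include <linux/device.h>
#include <linux/numa.h>
#include <linux/smp.h>
#include <linux/topology.h>
#include <linux/workqueue.h>

/*
 * Untested sketch: run a device's probe routine on a CPU that belongs
 * to the device's NUMA node, so that allocations made with
 * node-default GFP flags during probe end up on that node.
 */
static long do_probe(void *data)
{
	struct device *dev = data;

	/* the bus would invoke the driver's ->probe(dev) here */
	dev_dbg(dev, "probe running on CPU %d\n", raw_smp_processor_id());
	return 0;
}

static int probe_on_device_node(struct device *dev)
{
	int node = dev_to_node(dev);

	if (node != NUMA_NO_NODE) {
		int cpu = cpumask_any_and(cpumask_of_node(node),
					  cpu_online_mask);

		if (cpu < nr_cpu_ids)
			return work_on_cpu(cpu, do_probe, dev);
	}
	return do_probe(dev);
}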
>
>>
>> Bjorn, what do you think? Was this considered?
Hi Bjorn, could you please comment on this issue?
Thanks!
>>
>>> ---
>>> Changelog
>>> v1 -> v2:
>>> - fixed compile warning reported by LKP.
>>> ---
>>> drivers/virtio/virtio_ring.c | 10 ++++++----
>>> 1 file changed, 6 insertions(+), 4 deletions(-)
>>>
>>> diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
>>> index 58b96baa8d48..d38fd6872c8c 100644
>>> --- a/drivers/virtio/virtio_ring.c
>>> +++ b/drivers/virtio/virtio_ring.c
>>> @@ -276,9 +276,11 @@ static void *vring_alloc_queue(struct virtio_device *vdev, size_t size,
>>>  		return dma_alloc_coherent(vdev->dev.parent, size,
>>>  					  dma_handle, flag);
>>>  	} else {
>>> -		void *queue = alloc_pages_exact(PAGE_ALIGN(size), flag);
>>> -
>>> -		if (queue) {
>>> +		void *queue = NULL;
>>> +		struct page *page = alloc_pages_node(dev_to_node(vdev->dev.parent),
>>> +						     flag, get_order(size));
>>> +		if (page) {
>>> +			queue = page_address(page);
>>>  			phys_addr_t phys_addr = virt_to_phys(queue);
>>>  			*dma_handle = (dma_addr_t)phys_addr;
>>> @@ -308,7 +310,7 @@ static void vring_free_queue(struct virtio_device *vdev, size_t size,
>>>  	if (vring_use_dma_api(vdev))
>>>  		dma_free_coherent(vdev->dev.parent, size, queue, dma_handle);
>>>  	else
>>> -		free_pages_exact(queue, PAGE_ALIGN(size));
>>> +		free_pages((unsigned long)queue, get_order(size));
>>>  }
>>> /*
>>> --
>>> 2.24.0.rc2
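
As a side note for testing: a quick way to confirm the ring pages
really land on the device's node is to compare page_to_nid() against
dev_to_node(). This is a debugging sketch only, not part of the patch;
it assumes the `page` and `vdev` variables in vring_alloc_queue()
above:

	/* Debug-only check: warn if the ring was allocated off-node. */
	if (page && page_to_nid(page) != dev_to_node(vdev->dev.parent))
		dev_warn(&vdev->dev, "vring on node %d, device on node %d\n",
			 page_to_nid(page),
			 dev_to_node(vdev->dev.parent));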