Message-ID: <20200721041550-mutt-send-email-mst@kernel.org>
Date:   Tue, 21 Jul 2020 04:18:51 -0400
From:   "Michael S. Tsirkin" <mst@...hat.com>
To:     Shile Zhang <shile.zhang@...ux.alibaba.com>
Cc:     Jason Wang <jasowang@...hat.com>,
        virtualization@...ts.linux-foundation.org,
        linux-kernel@...r.kernel.org, kernel test robot <lkp@...el.com>,
        Jiang Liu <liuj97@...il.com>, linux-pci@...r.kernel.org,
        bhelgaas@...gle.com
Subject: Re: [PATCH v2] virtio_ring: use alloc_pages_node for NUMA-aware
 allocation

On Tue, Jul 21, 2020 at 03:00:13PM +0800, Shile Zhang wrote:
> Use alloc_pages_node() to allocate memory for the vring queue with
> proper NUMA affinity.
> 
> Reported-by: kernel test robot <lkp@...el.com>
> Suggested-by: Jiang Liu <liuj97@...il.com>
> Signed-off-by: Shile Zhang <shile.zhang@...ux.alibaba.com>

Do you observe any performance gains from this patch?

I also wonder why the probe code isn't run on the correct NUMA node in
the first place. That would fix a whole class of issues like this one
without the need to tweak individual drivers.

Bjorn, what do you think? Was this considered?

> ---
> Changelog
> v1 -> v2:
> - fixed compile warning reported by LKP.
> ---
>  drivers/virtio/virtio_ring.c | 10 ++++++----
>  1 file changed, 6 insertions(+), 4 deletions(-)
> 
> diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
> index 58b96baa8d48..d38fd6872c8c 100644
> --- a/drivers/virtio/virtio_ring.c
> +++ b/drivers/virtio/virtio_ring.c
> @@ -276,9 +276,11 @@ static void *vring_alloc_queue(struct virtio_device *vdev, size_t size,
>  		return dma_alloc_coherent(vdev->dev.parent, size,
>  					  dma_handle, flag);
>  	} else {
> -		void *queue = alloc_pages_exact(PAGE_ALIGN(size), flag);
> -
> -		if (queue) {
> +		void *queue = NULL;
> +		struct page *page = alloc_pages_node(dev_to_node(vdev->dev.parent),
> +						     flag, get_order(size));
> +		if (page) {
> +			queue = page_address(page);
>  			phys_addr_t phys_addr = virt_to_phys(queue);
>  			*dma_handle = (dma_addr_t)phys_addr;
>  
> @@ -308,7 +310,7 @@ static void vring_free_queue(struct virtio_device *vdev, size_t size,
>  	if (vring_use_dma_api(vdev))
>  		dma_free_coherent(vdev->dev.parent, size, queue, dma_handle);
>  	else
> -		free_pages_exact(queue, PAGE_ALIGN(size));
> +		free_pages((unsigned long)queue, get_order(size));
>  }
>  
>  /*
> -- 
> 2.24.0.rc2
