Message-ID: <20230315090739.128d4341@jacob-builder>
Date:   Wed, 15 Mar 2023 09:07:39 -0700
From:   Jacob Pan <jacob.jun.pan@...ux.intel.com>
To:     "Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>
Cc:     Andrew Morton <akpm@...ux-foundation.org>,
        Mel Gorman <mgorman@...e.de>, Vlastimil Babka <vbabka@...e.cz>,
        David Hildenbrand <david@...hat.com>, linux-mm@...ck.org,
        linux-arch@...r.kernel.org, linux-kernel@...r.kernel.org,
        Robin Murphy <robin.murphy@....com>,
        jacob.jun.pan@...ux.intel.com
Subject: Re: [PATCH 09/10] iommu: Fix MAX_ORDER usage in __iommu_dma_alloc_pages()

Hi Kirill,

On Wed, 15 Mar 2023 14:31:32 +0300, "Kirill A. Shutemov"
<kirill.shutemov@...ux.intel.com> wrote:

> MAX_ORDER is not inclusive: the maximum allocation order the buddy
> allocator can deliver is MAX_ORDER-1.
> 
> Fix MAX_ORDER usage in __iommu_dma_alloc_pages().
> 
> Also use GENMASK() instead of the hard-to-read "(2U << order) - 1" magic.
> 
> Signed-off-by: Kirill A. Shutemov <kirill.shutemov@...ux.intel.com>
> Cc: Robin Murphy <robin.murphy@....com>
> Cc: Jacob Pan <jacob.jun.pan@...ux.intel.com>
> ---
>  drivers/iommu/dma-iommu.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
> index 99b2646cb5c7..ac996fd6bd9c 100644
> --- a/drivers/iommu/dma-iommu.c
> +++ b/drivers/iommu/dma-iommu.c
> @@ -736,7 +736,7 @@ static struct page **__iommu_dma_alloc_pages(struct device *dev,
>  	struct page **pages;
>  	unsigned int i = 0, nid = dev_to_node(dev);
>  
> -	order_mask &= (2U << MAX_ORDER) - 1;
> +	order_mask &= GENMASK(MAX_ORDER - 1, 0);
>  	if (!order_mask)
>  		return NULL;
>  
> @@ -756,7 +756,7 @@ static struct page **__iommu_dma_alloc_pages(struct device *dev,
>  		 * than a necessity, hence using __GFP_NORETRY until
>  		 * falling back to minimum-order allocations.
>  		 */
> -		for (order_mask &= (2U << __fls(count)) - 1;
> +		for (order_mask &= GENMASK(__fls(count), 0);
>  		     order_mask; order_mask &= ~order_size) {
>  			unsigned int order = __fls(order_mask);
>  			gfp_t alloc_flags = gfp;
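
To make the off-by-one concrete, here is a minimal userspace sketch
(GENMASK() is expanded by hand since <linux/bits.h> is kernel-only, and
MAX_ORDER is pinned to 11, the historical default, purely for
illustration):

#include <stdio.h>

#define MAX_ORDER 11

/* Bits h..l inclusive, matching the kernel's GENMASK(h, l) for 32-bit use. */
#define GENMASK(h, l) ((~0U >> (31 - (h))) & (~0U << (l)))

int main(void)
{
	unsigned int old_mask = (2U << MAX_ORDER) - 1;     /* bits 0..MAX_ORDER   */
	unsigned int new_mask = GENMASK(MAX_ORDER - 1, 0); /* bits 0..MAX_ORDER-1 */

	/* The old mask wrongly admits order == MAX_ORDER, one step past
	 * the largest order the buddy allocator can actually deliver. */
	printf("old: %#x  new: %#x\n", old_mask, new_mask);
	printf("order %d in old mask: %d, in new mask: %d\n", MAX_ORDER,
	       !!(old_mask & (1U << MAX_ORDER)),
	       !!(new_mask & (1U << MAX_ORDER)));
	return 0;
}
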
Reviewed-by: Jacob Pan <jacob.jun.pan@...ux.intel.com>
(For the VT-d part, there is no functional impact at all: we only have
2M and 1G page sizes, no SZ_8M page.)
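
As a back-of-the-envelope check of that claim (assuming 4K base pages
and, again, the historical MAX_ORDER of 11): 2M maps to order 9 and 1G
to order 18, while the only order the old mask wrongly admitted is
MAX_ORDER itself, i.e. an 8M allocation, which VT-d never requests. A
hypothetical size_to_order() helper makes the arithmetic explicit:

#include <stdio.h>

#define PAGE_SHIFT 12	/* 4K base pages */

/* Smallest buddy order whose block covers 'size' bytes. */
static unsigned int size_to_order(unsigned long size)
{
	unsigned int order = 0;

	while ((1UL << (order + PAGE_SHIFT)) < size)
		order++;
	return order;
}

int main(void)
{
	printf("2M -> order %u\n", size_to_order(2UL << 20)); /* 9  */
	printf("8M -> order %u\n", size_to_order(8UL << 20)); /* 11 == MAX_ORDER */
	printf("1G -> order %u\n", size_to_order(1UL << 30)); /* 18 */
	return 0;
}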

Thanks,

Jacob
