Message-ID: <b97645ed-b524-a505-2993-e04a37b50d35@arm.com>
Date:   Tue, 31 May 2022 20:48:24 +0100
From:   Robin Murphy <robin.murphy@....com>
To:     Tony Battersby <tonyb@...ernetics.com>, linux-mm@...ck.org,
        linux-kernel@...r.kernel.org
Cc:     iommu@...ts.linux-foundation.org, kernel-team@...com,
        Matthew Wilcox <willy@...radead.org>,
        Keith Busch <kbusch@...nel.org>,
        Andy Shevchenko <andy.shevchenko@...il.com>,
        Tony Lindgren <tony@...mide.com>
Subject: Re: [PATCH 04/10] dmapool: improve accuracy of debug statistics

On 2022-05-31 19:17, Tony Battersby wrote:
> The "total number of blocks in pool" debug statistic currently does not
> take the boundary value into account, so it diverges from the "total
> number of blocks in use" statistic when a boundary is in effect.  Add a
> calculation for the number of blocks per allocation that takes the
> boundary into account, and use it to replace the inaccurate calculation.
> 
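(For concreteness: with e.g. allocation=4096, boundary=1024, size=768,
the current calculation reports 4096/768 = 5 blocks per page, but since
no block may cross a boundary, each 1024-byte segment holds only one
768-byte block, so the real figure is (4096/1024) * (1024/768) = 4.)
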
> This depends on the patch "dmapool: fix boundary comparison" for the
> calculated blks_per_alloc value to be correct.
> 
> Signed-off-by: Tony Battersby <tonyb@...ernetics.com>
> ---
>   mm/dmapool.c | 7 +++++--
>   1 file changed, 5 insertions(+), 2 deletions(-)
> 
> diff --git a/mm/dmapool.c b/mm/dmapool.c
> index 782143144a32..9e30f4425dea 100644
> --- a/mm/dmapool.c
> +++ b/mm/dmapool.c
> @@ -47,6 +47,7 @@ struct dma_pool {		/* the pool */
>   	struct device *dev;
>   	unsigned int allocation;
>   	unsigned int boundary;
> +	unsigned int blks_per_alloc;
>   	char name[32];
>   	struct list_head pools;
>   };
> @@ -92,8 +93,7 @@ static ssize_t pools_show(struct device *dev, struct device_attribute *attr, cha
>   		/* per-pool info, no real statistics yet */
>   		temp = scnprintf(next, size, "%-16s %4zu %4zu %4u %2u\n",

Nit: if we're tinkering with this, it's probably worth updating the 
whole function to use sysfs_emit{_at}().
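Something like this (completely untested sketch; the per-page
blocks/pages accounting is elided since it stays as-is):

	static ssize_t pools_show(struct device *dev,
				  struct device_attribute *attr, char *buf)
	{
		struct dma_pool *pool;
		int len;

		len = sysfs_emit(buf, "poolinfo - 0.1\n");

		mutex_lock(&pools_lock);
		list_for_each_entry(pool, &dev->dma_pools, pools) {
			/* ... blocks/pages walk as before ... */

			/*
			 * sysfs_emit_at() tracks the offset and enforces
			 * the PAGE_SIZE bound itself, so the size/next
			 * bookkeeping goes away entirely.
			 */
			len += sysfs_emit_at(buf, len,
					     "%-16s %4zu %4zu %4u %2u\n",
					     pool->name, blocks,
					     (size_t)pages * pool->blks_per_alloc,
					     pool->size, pages);
		}
		mutex_unlock(&pools_lock);

		return len;
	}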

>   				 pool->name, blocks,
> -				 (size_t) pages *
> -				 (pool->allocation / pool->size),
> +				 (size_t) pages * pool->blks_per_alloc,
>   				 pool->size, pages);
>   		size -= temp;
>   		next += temp;
> @@ -168,6 +168,9 @@ struct dma_pool *dma_pool_create(const char *name, struct device *dev,
>   	retval->size = size;
>   	retval->boundary = boundary;
>   	retval->allocation = allocation;
> +	retval->blks_per_alloc =
> +		(allocation / boundary) * (boundary / size) +
> +		(allocation % boundary) / size;

Do we really need to store this? Sure, 4 divisions (which could possibly 
be fewer given the constraints on boundary) isn't the absolute cheapest 
calculation, but I still can't imagine anyone would be polling sysfs 
stats hard enough to even notice.
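
FWIW, open-coding it at the point of use would just be (and AFAICS
dma_pool_create() never stores a zero boundary, so the divisions are
safe):

	size_t blks_per_alloc =
		(pool->allocation / pool->boundary) *
		(pool->boundary / pool->size) +
		(pool->allocation % pool->boundary) / pool->size;

	temp = scnprintf(next, size, "%-16s %4zu %4zu %4u %2u\n",
			 pool->name, blocks,
			 (size_t)pages * blks_per_alloc,
			 pool->size, pages);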

Thanks,
Robin.

>   
>   	INIT_LIST_HEAD(&retval->pools);
>   
