Message-ID: <30ea2fa3-0e4d-788b-b990-3bdb9e687377@acm.org>
Date: Wed, 26 Oct 2022 09:29:21 -0700
From: Bart Van Assche <bvanassche@....org>
To: Dawei Li <set_pte_at@...look.com>, axboe@...nel.dk
Cc: hch@....de, linux-block@...r.kernel.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH] block: simplify blksize_bits() implementation
On 10/26/22 08:14, Dawei Li wrote:
> Convert the current loop-based implementation into a bit operation,
> which brings the following improvements:
>
> 1) Bit operations are more efficient thanks to their arch-level
> optimization.
As far as I know blksize_bits() is not used in the hot path, so the
performance of this function is not critical.
> 2) Since blksize_bits() is inline, _if_ @size is a compile-time
> constant, order_base_2() _may_ allow the result to be evaluated at
> compile time, depending on the code context and compiler behavior.
>
> Signed-off-by: Dawei Li <set_pte_at@...look.com>
> ---
> include/linux/blkdev.h | 7 +------
> 1 file changed, 1 insertion(+), 6 deletions(-)
>
> diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
> index 50e358a19d98..117061c8b9a1 100644
> --- a/include/linux/blkdev.h
> +++ b/include/linux/blkdev.h
> @@ -1349,12 +1349,7 @@ static inline int blk_rq_aligned(struct request_queue *q, unsigned long addr,
>  /* assumes size > 256 */
>  static inline unsigned int blksize_bits(unsigned int size)
>  {
> -	unsigned int bits = 8;
> -	do {
> -		bits++;
> -		size >>= 1;
> -	} while (size > 256);
> -	return bits;
> +	return size > 512 ? order_base_2(size) : 9;
>  }
How about optimizing this function even further by eliminating the
ternary operator, e.g. as follows (untested)?
return order_base_2(size >> SECTOR_SHIFT) + SECTOR_SHIFT;
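For reference, here is a quick, untested user-space sketch that compares
the original loop, the patch's ternary version, and the shift-based
variant above for power-of-two sizes >= 512. The order_base_2() helper
below is a round-up log2 stand-in for the kernel macro, SECTOR_SHIFT is
assumed to be 9 as in the kernel, and the blksize_bits_*() names are
only illustrative:

#include <assert.h>
#include <stdio.h>

#define SECTOR_SHIFT 9

/* Round-up log2; stand-in for the kernel's order_base_2(). */
static unsigned int order_base_2(unsigned int n)
{
	return n <= 1 ? 0 : 32 - __builtin_clz(n - 1);
}

/* Original loop-based implementation (assumes size > 256). */
static unsigned int blksize_bits_loop(unsigned int size)
{
	unsigned int bits = 8;

	do {
		bits++;
		size >>= 1;
	} while (size > 256);
	return bits;
}

/* Version proposed in the patch. */
static unsigned int blksize_bits_ternary(unsigned int size)
{
	return size > 512 ? order_base_2(size) : 9;
}

/* Variant suggested above, without the ternary operator. */
static unsigned int blksize_bits_shift(unsigned int size)
{
	return order_base_2(size >> SECTOR_SHIFT) + SECTOR_SHIFT;
}

int main(void)
{
	unsigned int size;

	for (size = 512; size <= 64 * 1024; size <<= 1) {
		assert(blksize_bits_loop(size) == blksize_bits_ternary(size));
		assert(blksize_bits_loop(size) == blksize_bits_shift(size));
		printf("%u -> %u bits\n", size, blksize_bits_loop(size));
	}
	return 0;
}

The kernel macros differ in implementation details, so this only
illustrates the equivalence for the common power-of-two block sizes and
is not a substitute for testing the real code.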
Thanks,
Bart.