Message-ID: <8fc56daf-7f8d-b62b-b6bf-4da4ca87ea20@cybernetics.com>
Date:   Mon, 5 Dec 2022 13:07:15 -0500
From:   Tony Battersby <tonyb@...ernetics.com>
To:     Keith Busch <kbusch@...a.com>, linux-mm@...ck.org,
        linux-kernel@...r.kernel.org, Matthew Wilcox <willy@...radead.org>
Cc:     Keith Busch <kbusch@...nel.org>
Subject: Re: [PATCH 11/11] dmapool: link blocks across pages

On 12/5/22 09:59, Keith Busch wrote:
> From: Keith Busch <kbusch@...nel.org>
>
> The allocated dmapool pages are never freed for the lifetime of the
> pool. There is no need for the two level list+stack lookup for finding a
> free block since nothing is ever removed from the list. Just use a
> simple stack, reducing time complexity to constant.
>
> The implementation inserts the stack linking elements and the dma handle
> of the block within itself when freed. This means the smallest possible
> dmapool block is increased to at most 16 bytes to accommodate these
> fields, but there are no existing users requesting a dma pool smaller
> than that anyway.

Great work!

I notice that the comment at the top of dmapool.c describes the old
design ("Free blocks are tracked in an unsorted singly-linked
list of free blocks within the page."), so you need to delete or update
that part of the comment.
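
For example, that paragraph could be replaced with a short description
of the new scheme. If I am reading the patch right (only part of it is
quoted here, so treat this as a sketch rather than the posted code), a
free block now overlays a record like:

	struct dma_block {	/* lives inside a free block */
		struct dma_block *next_block;	/* pool-wide free stack */
		dma_addr_t dma;			/* DMA address of this block */
	};

with every page's blocks threaded onto the single pool->next_block
stack, which is also why the minimum block size grows to 16 bytes (two
64-bit fields on a 64-bit kernel with a 64-bit dma_addr_t).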

>  struct dma_pool {		/* the pool */
>  	struct list_head page_list;
>  	spinlock_t lock;
>  	struct device *dev;
> +	struct dma_block *next_block;
>  	unsigned int size;
>  	unsigned int allocation;
>  	unsigned int boundary;
> +	unsigned int nr_blocks;
> +	unsigned int nr_active;
> +	unsigned int nr_pages;

I think nr_blocks, nr_active, and nr_pages should be size_t rather than
unsigned int since they count the number of objects in the entire pool,
and it would be theoretically possible to allocate more than 2^32 objects.
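
For scale: even at the new 16-byte minimum block size, 2^32 blocks is
only 64 GiB of pool memory, which is within reach on a large machine,
and at that point an unsigned int counter would wrap.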


> @@ -199,22 +217,24 @@ EXPORT_SYMBOL(dma_pool_create);
>  
>  static void pool_initialise_page(struct dma_pool *pool, struct dma_page *page)
>  {
> -	unsigned int offset = 0;
> -	unsigned int next_boundary = pool->boundary;
> -
> -	page->in_use = 0;
> -	page->offset = 0;
> -	do {
> -		unsigned int next = offset + pool->size;
> -		if (unlikely((next + pool->size) >= next_boundary)) {
> -			next = next_boundary;
> +	unsigned int next_boundary = pool->boundary, offset = 0;
> +	struct dma_block *block;
> +
> +	while (offset < pool->allocation) {
> +		if (offset > next_boundary) {

This is incorrect.  I believe the correct comparison should be:

+    while (offset + pool->size <= pool->allocation) {
+        if (offset + pool->size > next_boundary) {

That should handle all the weird possible combinations of size,
boundary, and allocation.
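
To make the failure concrete (numbers picked only for illustration):
with size = 96 and boundary = 128, the posted test places the second
block at offset 96, so it spans 96..192 and crosses the 128-byte
boundary; and with no boundary below the 4096-byte allocation,
"offset < pool->allocation" still admits a block at offset 4032 that
runs past the end of the page. With the two checks above, the loop
would look roughly like this (a sketch of the shape, not the posted
patch; the skip needs a continue so the size check is re-applied after
jumping to the boundary):

	while (offset + pool->size <= pool->allocation) {
		if (offset + pool->size > next_boundary) {
			/* block would straddle the boundary: skip ahead */
			offset = next_boundary;
			next_boundary += pool->boundary;
			continue;
		}
		/* ... link the block at 'offset' into the free stack ... */
		offset += pool->size;
	}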

Tony Battersby
Cybernetics
