Date:   Thu, 30 Jan 2020 23:46:46 +0000
From:   Robin Murphy <robin.murphy@....com>
To:     Eric Dumazet <edumazet@...gle.com>, Christoph Hellwig <hch@....de>,
        Joerg Roedel <jroedel@...e.de>
Cc:     iommu@...ts.linux-foundation.org,
        Eric Dumazet <eric.dumazet@...il.com>,
        Geert Uytterhoeven <geert@...ux-m68k.org>,
        linux-kernel <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] dma-debug: dynamic allocation of hash table

Hi Eric,

On 2020-01-30 7:10 pm, Eric Dumazet via iommu wrote:
> Increasing the size of dma_entry_hash by 327680 bytes
> has run into some bootloader limitations.

[ That might warrant some further explanation - I don't quite follow how 
this would relate to a bootloader specifically :/ ]

> Simply use dynamic allocations instead, and take
> this opportunity to increase the hash table to 65536
> buckets. Finally my 40Gbit mlx4 NIC can sustain
> line rate with CONFIG_DMA_API_DEBUG=y.

That's pretty cool, but given that making the table bigger caused a 
problem in the first place, I can't help wondering whether making it 
bigger yet again in the name of a fix is really the wisest move. How might this 
impact DMA debugging on 32-bit embedded systems with limited vmalloc 
space and even less RAM, for instance? More to the point, does vmalloc() 
even work for !CONFIG_MMU builds? Obviously we don't want things to be 
*needlessly* slow if avoidable, but is there a genuine justification for 
needing to optimise what is fundamentally an invasive heavyweight 
correctness check - e.g. has it helped expose race conditions that were 
otherwise masked?

That said, by moving to dynamic allocation maybe there's room to be 
cleverer and make HASH_SIZE scale with, say, system memory size? (I 
assume from the context it's not something we can expand on-demand like 
we did for the dma_debug_entry pool)

Robin.

> Fixes: 5e76f564572b ("dma-debug: increase HASH_SIZE")
> Signed-off-by: Eric Dumazet <edumazet@...gle.com>
> Reported-by: Geert Uytterhoeven <geert@...ux-m68k.org>
> Cc: Christoph Hellwig <hch@....de>
> ---
>   kernel/dma/debug.c | 10 ++++++++--
>   1 file changed, 8 insertions(+), 2 deletions(-)
> 
> diff --git a/kernel/dma/debug.c b/kernel/dma/debug.c
> index 2031ed1ad7fa109bb8a8c290bbbc5f825362baba..a310dbb1515e92c081f8f3f9a7290dd5e53fc889 100644
> --- a/kernel/dma/debug.c
> +++ b/kernel/dma/debug.c
> @@ -27,7 +27,7 @@
>   
>   #include <asm/sections.h>
>   
> -#define HASH_SIZE       16384ULL
> +#define HASH_SIZE       65536ULL
>   #define HASH_FN_SHIFT   13
>   #define HASH_FN_MASK    (HASH_SIZE - 1)
>   
> @@ -90,7 +90,8 @@ struct hash_bucket {
>   };
>   
>   /* Hash list to save the allocated dma addresses */
> -static struct hash_bucket dma_entry_hash[HASH_SIZE];
> +static struct hash_bucket *dma_entry_hash __read_mostly;
> +
>   /* List of pre-allocated dma_debug_entry's */
>   static LIST_HEAD(free_entries);
>   /* Lock for the list above */
> @@ -934,6 +935,10 @@ static int dma_debug_init(void)
>   	if (global_disable)
>   		return 0;
>   
> +	dma_entry_hash = vmalloc(HASH_SIZE * sizeof(*dma_entry_hash));
> +	if (!dma_entry_hash)
> +		goto err;
> +
>   	for (i = 0; i < HASH_SIZE; ++i) {
>   		INIT_LIST_HEAD(&dma_entry_hash[i].list);
>   		spin_lock_init(&dma_entry_hash[i].lock);
> @@ -950,6 +955,7 @@ static int dma_debug_init(void)
>   		pr_warn("%d debug entries requested but only %d allocated\n",
>   			nr_prealloc_entries, nr_total_entries);
>   	} else {
> +err:
>   		pr_err("debugging out of memory error - disabled\n");
>   		global_disable = true;
>   
> 
