Date:   Mon, 8 Jun 2020 13:53:17 +0100
From:   Robin Murphy <robin.murphy@....com>
To:     Geert Uytterhoeven <geert@...ux-m68k.org>
Cc:     Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
        Linux IOMMU <iommu@...ts.linux-foundation.org>,
        David Rientjes <rientjes@...gle.com>,
        Christoph Hellwig <hch@....de>
Subject: Re: [PATCH] dma-pool: Fix too large DMA pools on medium systems

On 2020-06-08 13:25, Geert Uytterhoeven wrote:
> Hi Robin,
> 
> On Mon, Jun 8, 2020 at 2:04 PM Robin Murphy <robin.murphy@....com> wrote:
>> On 2020-06-08 09:52, Geert Uytterhoeven wrote:
>>> On systems with at least 32 MiB, but less than 32 GiB of RAM, the DMA
>>> memory pools are much larger than intended (e.g. 2 MiB instead of 128
>>> KiB on a 256 MiB system).
>>>
>>> Fix this by correcting the calculation of the number of GiBs of RAM in
>>> the system.
>>>
>>> Fixes: 1d659236fb43c4d2 ("dma-pool: scale the default DMA coherent pool size with memory capacity")
>>> Signed-off-by: Geert Uytterhoeven <geert@...ux-m68k.org>
> 
>>> --- a/kernel/dma/pool.c
>>> +++ b/kernel/dma/pool.c
>>> @@ -175,8 +175,8 @@ static int __init dma_atomic_pool_init(void)
>>>         * sizes to 128KB per 1GB of memory, min 128KB, max MAX_ORDER-1.
>>>         */
>>>        if (!atomic_pool_size) {
>>> -             atomic_pool_size = max(totalram_pages() >> PAGE_SHIFT, 1UL) *
>>> -                                     SZ_128K;
>>> +             unsigned long gigs = totalram_pages() >> (30 - PAGE_SHIFT);
>>> +             atomic_pool_size = max(gigs, 1UL) * SZ_128K;
>>>                atomic_pool_size = min_t(size_t, atomic_pool_size,
>>>                                         1 << (PAGE_SHIFT + MAX_ORDER-1));
>>>        }
>>
>> Nit: although this probably is right, it seems even less readable than
> 
> ">> (x - PAGE_SHIFT)" is a commonly used construct in the kernel.

Sure, but when "x" is a magic number there's still extra cognitive load 
in determining whether it's the *right* magic number ;)

Mostly, though, it was just the fact that an expression involving 5 
different units (bytes, pages, "gigs", bits, and whatever MAX_ORDER is) 
is inherently more challenging to follow than the equivalent thing 
framed in fewer, especially when it can be reasonably done in just two 
(bytes and pages).
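
For the archives, a quick user-space sanity check of the numbers from
the commit message -- assuming 4 KiB pages (PAGE_SHIFT == 12) and the
256 MiB system from the example; a standalone sketch, not the kernel
code itself:

#include <assert.h>
#include <stdio.h>

#define PAGE_SHIFT 12                 /* assume 4 KiB pages */
#define SZ_128K    (128UL * 1024)
#define MAX(a, b)  ((a) > (b) ? (a) : (b))

int main(void)
{
	/* what totalram_pages() would return on a 256 MiB system */
	unsigned long total_pages = (256UL << 20) >> PAGE_SHIFT;

	/* broken: ">> PAGE_SHIFT" applied to a value already in pages */
	unsigned long broken = MAX(total_pages >> PAGE_SHIFT, 1UL) * SZ_128K;

	/* fixed: pages >> (30 - PAGE_SHIFT) counts whole GiBs (1 GiB = 2^30 B) */
	unsigned long gigs  = total_pages >> (30 - PAGE_SHIFT);
	unsigned long fixed = MAX(gigs, 1UL) * SZ_128K;

	assert(broken == 2UL << 20);   /* 2 MiB  -- the reported bug   */
	assert(fixed == 128UL * 1024); /* 128 KiB -- the intended min  */
	printf("broken = %lu KiB, fixed = %lu KiB\n",
	       broken >> 10, fixed >> 10);
	return 0;
}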

Robin.

>> the broken version (where at least some at-a-glance 'dimensional
>> analysis' flags up "(number of pages) >> PAGE_SHIFT" as rather
>> suspicious). How about something a little more self-explanatory, e.g.:
>>
>>          unsigned long pages = totalram_pages() * SZ_128K / SZ_1G;
> 
> That multiplication will overflow on 32-bit systems (perhaps even on
> large 64-bit systems; any 47-bit addressing?).
> 
>          unsigned long pages = totalram_pages() / (SZ_1G / SZ_128K);
> 
>>          atomic_pool_size = min(pages, MAX_ORDER_NR_PAGES) << PAGE_SHIFT;
>>          atomic_pool_size = max_t(size_t, atomic_pool_size, SZ_128K);
> 
> I agree this part is an improvement.
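
Right -- so folding your overflow-safe division into that, the end
result would presumably look something like this (untested sketch;
min_t() rather than min() to keep the type checking happy, since
MAX_ORDER_NR_PAGES isn't an unsigned long):

	unsigned long pages = totalram_pages() / (SZ_1G / SZ_128K);

	atomic_pool_size = min_t(unsigned long, pages,
				 MAX_ORDER_NR_PAGES) << PAGE_SHIFT;
	atomic_pool_size = max_t(size_t, atomic_pool_size, SZ_128K);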
> 
> Gr{oetje,eeting}s,
> 
>                          Geert
> 
