Message-ID: <6b75da91-c24d-4d54-e6ac-ff580141fda9@arm.com>
Date:   Wed, 8 Jul 2020 10:11:30 -0500
From:   Jeremy Linton <jeremy.linton@....com>
To:     Nicolas Saenz Julienne <nsaenzjulienne@...e.de>,
        Christoph Hellwig <hch@....de>,
        Marek Szyprowski <m.szyprowski@...sung.com>,
        Robin Murphy <robin.murphy@....com>,
        David Rientjes <rientjes@...gle.com>
Cc:     linux-rpi-kernel@...ts.infradead.org,
        iommu@...ts.linux-foundation.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] dma-pool: use single atomic pool for both DMA zones

Hi,

On 7/8/20 5:35 AM, Nicolas Saenz Julienne wrote:
> Hi Jim,
> 
> On Tue, 2020-07-07 at 17:08 -0500, Jeremy Linton wrote:
>> Hi,
>>
>> I spun this up on my 8G model using the PFTF firmware from:
>>
>> https://github.com/pftf/RPi4/releases
>>
>> Which allows me to switch between ACPI/DT on the machine. In DT mode it
>> works fine now,
> 
> Nice, would that count as a Tested-by from you?

If it worked... :)

> 
>> but with ACPI I continue to have failures unless I
>> disable CMA via cma=0 on the kernel command line.
> 
> Yes, I see why, in atomic_pool_expand() memory is allocated from CMA without
> checking its correctness. That calls for a separate fix. I'll try to think of
> something.
> 
>> I think that is because
>>
>> using DT:
>>
>> [    0.000000] Reserved memory: created CMA memory pool at
>> 0x0000000037400000, size 64 MiB
>>
>>
>> using ACPI:
>> [    0.000000] cma: Reserved 64 MiB at 0x00000000f8000000
>>
>> Which is AFAIK because the default arm64 CMA allocation is just below
>> the arm64_dma32_phys_limit.
> 
> As I'm sure you know, we fix the CMA address through DT, isn't that possible
> through ACPI?

Well, there isn't a Linux-specific CMA location property in ACPI. There 
are various ways to infer the information, like looking for the lowest 
_DMA() range and using that to lower the arm64_dma32_phys_limit. OTOH, 
as it stands I don't think that information is available early enough to 
set up the CMA pool.
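
To make the idea a bit more concrete, something along these lines is 
what I have in mind (just a rough, untested sketch; 
acpi_walk_for_dma_limit() is a made-up helper standing in for an ACPI 
namespace scan of _DMA() descriptors, and the real difficulty is doing 
that scan this early at all):

#include <linux/acpi.h>
#include <linux/memblock.h>

/*
 * Sketch only: find the lowest _DMA() end address in the namespace and
 * use it to clamp the arm64 32-bit DMA limit. acpi_walk_for_dma_limit()
 * is hypothetical; the namespace may not be usable before zone sizing
 * and CMA reservation, which is the problem described above.
 */
static phys_addr_t __init acpi_min_dma_limit(void)
{
	phys_addr_t limit = PHYS_ADDR_MAX;

	/* hypothetical: record the lowest _DMA() end address in 'limit' */
	acpi_walk_for_dma_limit(&limit);

	return limit;
}

/* called early, before the CMA area is reserved: */
static void __init clamp_dma32_limit_from_acpi(void)
{
	arm64_dma32_phys_limit = min(arm64_dma32_phys_limit,
				     acpi_min_dma_limit());
}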

But as you mention, the atomic pool code is allocating from CMA under the 
assumption that it's going to be below the GFP_DMA range, which might not 
be generally true (due to the lack of DT CMA properties too?).
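
Roughly the kind of check I'd expect there (only a sketch against my 
reading of kernel/dma/pool.c, not a real patch; the mapping of 
GFP_DMA/GFP_DMA32 to zone_dma_bits / DMA_BIT_MASK(32) is my assumption 
about how the zone limits line up):

#include <linux/dma-contiguous.h>
#include <linux/dma-direct.h>
#include <linux/gfp.h>

/*
 * Sketch: after taking pages from CMA, verify they actually sit below
 * the limit the pool is meant to serve, and fall back to the page
 * allocator otherwise.
 */
static struct page *pool_alloc_pages(size_t size, unsigned int order,
				     gfp_t gfp)
{
	struct page *page = NULL;

	if (dev_get_cma_area(NULL))
		page = dma_alloc_from_contiguous(NULL, 1 << order, order,
						 false);

	if (page && (gfp & (GFP_DMA | GFP_DMA32))) {
		u64 limit = (gfp & GFP_DMA) ? DMA_BIT_MASK(zone_dma_bits)
					    : DMA_BIT_MASK(32);

		/* CMA handed back memory outside the zone, don't use it */
		if (page_to_phys(page) + size - 1 > limit) {
			dma_release_from_contiguous(NULL, page, 1 << order);
			page = NULL;
		}
	}

	if (!page)
		page = alloc_pages(gfp, order);

	return page;
}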
