Message-ID: <a05d8116-4363-ed3d-d457-b0f363882995@gmail.com>
Date:   Wed, 3 Apr 2019 10:27:26 -0700
From:   Florian Fainelli <f.fainelli@...il.com>
To:     Will Deacon <will.deacon@....com>, pierre kuo <vichy.kuo@...il.com>
Cc:     Catalin Marinas <catalin.marinas@....com>,
        Steven Price <steven.price@....com>,
        linux-kernel@...r.kernel.org, linux-arm-kernel@...ts.infradead.org,
        ard.biesheuvel@...aro.org
Subject: Re: [PATCH v2 1/1] initrd: move initrd_start calculate within linear
 mapping range check

On 4/3/19 10:24 AM, Will Deacon wrote:
> On Thu, Apr 04, 2019 at 12:44:25AM +0800, pierre kuo wrote:
>>> On Mon, Apr 01, 2019 at 10:59:53PM +0800, pierre kuo wrote:
>>>>>> With CONFIG_RANDOMIZE_BASE we can get a further change to memstart_addr
>>>>>> after the place where you moved the initrd_{start,end} setting, which
>>>>>> would result in a different value for __phys_to_virt(phys_initrd_start).
>>>>>
>>>>> I see what you mean, since __phys_to_virt uses PHYS_OFFSET
>>>>> (memstart_addr) for the calculation.
>>>>> How about moving the CONFIG_RANDOMIZE_BASE part of the code ahead of
>>>>> the CONFIG_BLK_DEV_INITRD check?
>>>>>
>>>>> That means moving (d) below ahead of (c):
>>>>> previous:
>>>>> if (memstart_addr + linear_region_size < memblock_end_of_DRAM()) {} ---(a)
>>>>> if (memory_limit != PHYS_ADDR_MAX) {}            ---(b)
>>>>> if (IS_ENABLED(CONFIG_BLK_DEV_INITRD) && phys_initrd_size) {} ---(c)
>>>>> if (IS_ENABLED(CONFIG_RANDOMIZE_BASE)) {} ---(d)
>>>>>
>>>>> now:
>>>>> if (memstart_addr + linear_region_size < memblock_end_of_DRAM()) {} ---(a)
>>>>> if (memory_limit != PHYS_ADDR_MAX) {} ---(b)
>>>>> if (IS_ENABLED(CONFIG_RANDOMIZE_BASE)) {} ---(d)
>>>>> if (IS_ENABLED(CONFIG_BLK_DEV_INITRD) && phys_initrd_size) {} ---(c)
>>>>>
>>>>
>>>> After tracing the kernel code,
>>>> is it even possible to move the CONFIG_RANDOMIZE_BASE part of the code
>>>> ahead of memory_limit?
>>>>
>>>> That means the flow may look like below:
>>>> now2:
>>>> if (memstart_addr + linear_region_size < memblock_end_of_DRAM()) {} ---(a)
>>>> if (IS_ENABLED(CONFIG_RANDOMIZE_BASE)) {} ---(d)
>>>> if (memory_limit != PHYS_ADDR_MAX) {}            ---(b)
>>>> if (IS_ENABLED(CONFIG_BLK_DEV_INITRD) && phys_initrd_size) {} ---(c)
>>>>
>>>> In (b), the results of __pa_symbol(_text), memory_limit,
>>>> memblock_mem_limit_remove_map and memblock_add
>>>> do not depend on memstart_addr.
>>>> So the now2 flow can group the modifications of memstart_addr, putting
>>>> (a) and (d) together.
>>>
>>> I'm afraid that you've lost me with this.
>> Thanks for your kind suggestion ^^
>>
>>> Why isn't it just as simple as
>>> the diff below?
>>> diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
>>> index c29b17b520cd..ec3487c94b10 100644
>>> --- a/arch/arm64/mm/init.c
>>> +++ b/arch/arm64/mm/init.c
>>> @@ -377,7 +377,7 @@ void __init arm64_memblock_init(void)
>>>                          base + size > memblock_start_of_DRAM() +
>>>                                        linear_region_size,
>>>                         "initrd not fully accessible via the linear mapping -- please check your bootloader ...\n")) {
>>> -                       initrd_start = 0;
>>> +                       phys_initrd_size = 0;
>>>                 } else {
>>>                         memblock_remove(base, size); /* clear MEMBLOCK_ flags */
>>>                         memblock_add(base, size);
>>
>> This patch will also fix the issue, but it still needs two "if
>> comparisons" to get initrd_start/initrd_end.
>> By grouping the modifications of memstart_addr and calculating
>> initrd_start/initrd_end only when the linear mapping check passes,
>> the code might (just maybe) become more concise.
> 
> Maybe, but I don't think we've seen a patch which accomplishes that. I think
> I'll go ahead and commit the basic one-liner, then we can always improve it
> afterwards if somebody sends a patch. It's not like this is a fastpath.

Sorry for the slow response, and for introducing the bug in the first
place. Yes, I agree here; a one-liner is a better way to get this fixed:

Acked-by: Florian Fainelli <f.fainelli@...il.com>
-- 
Florian
