Message-ID: <CAOVJa8HSB34ggku=96KAb6qju3G5-uGYFxCE6O2eQRMwU4bd1A@mail.gmail.com>
Date:   Sun, 31 Mar 2019 23:14:44 +0800
From:   pierre kuo <vichy.kuo@...il.com>
To:     Catalin Marinas <catalin.marinas@....com>
Cc:     Steven Price <steven.price@....com>,
        Will Deacon <will.deacon@....com>,
        linux-kernel@...r.kernel.org, linux-arm-kernel@...ts.infradead.org,
        Florian Fainelli <f.fainelli@...il.com>
Subject: Re: [PATCH v2 1/1] initrd: move initrd_start calculate within linear
 mapping range check

Hi Catalin,

> On Thu, Mar 14, 2019 at 11:20:47AM +0800, pierre Kuo wrote:
> > In the previous code, initrd_start and initrd_end could still be
> > assigned when either (base < memblock_start_of_DRAM()) or (base +
> > size > memblock_start_of_DRAM() + linear_region_size) was true.
> >
> > That means that even when the linear mapping range check failed,
> > initrd_start and initrd_end still received virtual addresses. This
> > patch calculates initrd_start/initrd_end only when the linear
> > mapping check passes.
> >
> > Fixes: c756c592e442 ("arm64: Utilize phys_initrd_start/phys_initrd_size")
>
> For future versions, please also cc the author of the original commit
> you are fixing.

Got it, and thanks for your kind reminder ^^

> >
> >  arch/arm64/mm/init.c | 8 +++-----
> >  1 file changed, 3 insertions(+), 5 deletions(-)
> >
> > diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
> > index 7205a9085b4d..1adf418de685 100644
> > --- a/arch/arm64/mm/init.c
> > +++ b/arch/arm64/mm/init.c
> > @@ -425,6 +425,9 @@ void __init arm64_memblock_init(void)
> >                       memblock_remove(base, size); /* clear MEMBLOCK_ flags */
> >                       memblock_add(base, size);
> >                       memblock_reserve(base, size);
> > +                     /* the generic initrd code expects virtual addresses */
> > +                     initrd_start = __phys_to_virt(phys_initrd_start);
> > +                     initrd_end = initrd_start + phys_initrd_size;
> >               }
> >       }
> >
> > @@ -450,11 +453,6 @@ void __init arm64_memblock_init(void)
> >        * pagetables with memblock.
> >        */
> >       memblock_reserve(__pa_symbol(_text), _end - _text);
> > -     if (IS_ENABLED(CONFIG_BLK_DEV_INITRD) && phys_initrd_size) {
> > -             /* the generic initrd code expects virtual addresses */
> > -             initrd_start = __phys_to_virt(phys_initrd_start);
> > -             initrd_end = initrd_start + phys_initrd_size;
> > -     }
>
> With CONFIG_RANDOMIZE_BASE we can get a further change to memstart_addr
> after the place where you moved the initrd_{start,end} setting, which
> would result in a different value for __phys_to_virt(phys_initrd_start).

I see what you mean: __phys_to_virt() uses PHYS_OFFSET (memstart_addr)
in its calculation. How about moving the CONFIG_RANDOMIZE_BASE part of
the code ahead of the CONFIG_BLK_DEV_INITRD check?
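
For reference, a rough model of the dependency (simplified from
arch/arm64/include/asm/memory.h of that era; details vary across
kernel versions):

    /* on arm64, PHYS_OFFSET expands to memstart_addr */
    #define __phys_to_virt(x) \
            ((unsigned long)((x) - PHYS_OFFSET) | PAGE_OFFSET)

    /* if initrd_start is computed here ... */
    initrd_start = __phys_to_virt(phys_initrd_start);

    /*
     * ... and CONFIG_RANDOMIZE_BASE later changes memstart_addr,
     * initrd_start keeps the old PHYS_OFFSET baked in and no longer
     * points into the final linear mapping.
     */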

That means moving (d) ahead of (c), as shown below (a code sketch of
the result follows the second list).

Previous:
if (memstart_addr + linear_region_size < memblock_end_of_DRAM()) {}  ---(a)
if (memory_limit != PHYS_ADDR_MAX) {}  ---(b)
if (IS_ENABLED(CONFIG_BLK_DEV_INITRD) && phys_initrd_size) {}  ---(c)
if (IS_ENABLED(CONFIG_RANDOMIZE_BASE)) {}  ---(d)

Now:
if (memstart_addr + linear_region_size < memblock_end_of_DRAM()) {}  ---(a)
if (memory_limit != PHYS_ADDR_MAX) {}  ---(b)
if (IS_ENABLED(CONFIG_RANDOMIZE_BASE)) {}  ---(d)
if (IS_ENABLED(CONFIG_BLK_DEV_INITRD) && phys_initrd_size) {}  ---(c)
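
Roughly, the reordered arm64_memblock_init() would then look like this
(a sketch only, with the bodies of (a), (b) and (d) elided):

    void __init arm64_memblock_init(void)
    {
            ...
            if (memstart_addr + linear_region_size < memblock_end_of_DRAM()) {
                    ...                                     /* (a) */
            }
            if (memory_limit != PHYS_ADDR_MAX) {
                    ...                                     /* (b) */
            }
            if (IS_ENABLED(CONFIG_RANDOMIZE_BASE)) {
                    /* (d): memstart_addr gets its final value here */
                    ...
            }
            if (IS_ENABLED(CONFIG_BLK_DEV_INITRD) && phys_initrd_size) {
                    /* (c): __phys_to_virt() now sees the final PHYS_OFFSET */
                    initrd_start = __phys_to_virt(phys_initrd_start);
                    initrd_end = initrd_start + phys_initrd_size;
            }
            ...
    }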

I'd appreciate your kind advice.
