Message-ID: <BLUPR13MB02892B82F5EB663B2FECF20EDFB40@BLUPR13MB0289.namprd13.prod.outlook.com>
Date: Tue, 25 Dec 2018 02:30:32 +0000
From: Yueyi Li <liyueyi@...e.com>
To: Ard Biesheuvel <ard.biesheuvel@...aro.org>
CC: "catalin.marinas@....com" <catalin.marinas@....com>,
"will.deacon@....com" <will.deacon@....com>,
"akpm@...ux-foundation.org" <akpm@...ux-foundation.org>,
"linux-arm-kernel@...ts.infradead.org"
<linux-arm-kernel@...ts.infradead.org>,
"markus@...rhumer.com" <markus@...rhumer.com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] arm64: kaslr: Reserve size of ARM64_MEMSTART_ALIGN in linear region

Hi Ard,
On 2018/12/24 17:45, Ard Biesheuvel wrote:
> Does the following change fix your issue as well?
>
> index 9b432d9fcada..9dcf0ff75a11 100644
> --- a/arch/arm64/mm/init.c
> +++ b/arch/arm64/mm/init.c
> @@ -447,7 +447,7 @@ void __init arm64_memblock_init(void)
> * memory spans, randomize the linear region as well.
> */
> if (memstart_offset_seed > 0 && range >= ARM64_MEMSTART_ALIGN) {
> - range = range / ARM64_MEMSTART_ALIGN + 1;
> + range /= ARM64_MEMSTART_ALIGN;
> memstart_addr -= ARM64_MEMSTART_ALIGN *
> ((range * memstart_offset_seed) >> 16);
> }
Yes, that fixes the issue as well. I just think modifying the first *range*
calculation would be easier to grasp; what do you think?
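
To make the arithmetic concrete, here is a minimal standalone sketch of why
dropping the "+ 1" matters. It is not from the original thread: the 1 GiB
value for ARM64_MEMSTART_ALIGN and the 16-unit range are assumptions chosen
purely for illustration. With the worst-case 16-bit seed of 0xffff, the old
"+ 1" form lets the randomization offset consume the entire range, while the
proposed form always leaves at least one ARM64_MEMSTART_ALIGN of slack, as
the patch subject describes.

  #include <stdint.h>
  #include <stdio.h>

  /* Assumed value, for illustration only; the real constant depends
   * on the kernel configuration. */
  #define ARM64_MEMSTART_ALIGN (UINT64_C(1) << 30)

  int main(void)
  {
          uint64_t range = 16 * ARM64_MEMSTART_ALIGN; /* hypothetical slack */
          uint64_t seed  = 0xffff;      /* worst-case memstart_offset_seed */

          /* Old code: "+ 1" lets the offset reach the full range. */
          uint64_t old_units = range / ARM64_MEMSTART_ALIGN + 1;
          uint64_t old_off   = ARM64_MEMSTART_ALIGN * ((old_units * seed) >> 16);

          /* Proposed code: dropping "+ 1" leaves at least one unit spare. */
          uint64_t new_units = range / ARM64_MEMSTART_ALIGN;
          uint64_t new_off   = ARM64_MEMSTART_ALIGN * ((new_units * seed) >> 16);

          printf("range     : %llu units\n",
                 (unsigned long long)(range / ARM64_MEMSTART_ALIGN));
          printf("old offset: %llu units (equals range: slack fully consumed)\n",
                 (unsigned long long)(old_off / ARM64_MEMSTART_ALIGN));
          printf("new offset: %llu units (at least one unit left reserved)\n",
                 (unsigned long long)(new_off / ARM64_MEMSTART_ALIGN));
          return 0;
  }

With a 16-unit range this prints an old offset of 16 units (the whole range)
versus a new offset of at most 15 units, so memstart_addr can no longer be
moved by the entire slack and the end of DRAM no longer lands at the very
top of the linear region.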
Thanks,
Yueyi