Message-ID: <BLUPR13MB028993B73A531E5C7EA62DCEDF820@BLUPR13MB0289.namprd13.prod.outlook.com>
Date: Wed, 16 Jan 2019 03:37:37 +0000
From: Yueyi Li <liyueyi@...e.com>
To: Ard Biesheuvel <ard.biesheuvel@...aro.org>,
"catalin.marinas@....com" <catalin.marinas@....com>,
"will.deacon@....com" <will.deacon@....com>,
"akpm@...ux-foundation.org" <akpm@...ux-foundation.org>
CC: "linux-arm-kernel@...ts.infradead.org"
<linux-arm-kernel@...ts.infradead.org>,
"markus@...rhumer.com" <markus@...rhumer.com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] arm64: kaslr: Reserve size of ARM64_MEMSTART_ALIGN in
linear region
OK, thanks. But it seems this mail was ignored; do I need to re-send the patch?
On 2018/12/26 21:49, Ard Biesheuvel wrote:
> On Tue, 25 Dec 2018 at 03:30, Yueyi Li <liyueyi@...e.com> wrote:
>> Hi Ard,
>>
>>
>> On 2018/12/24 17:45, Ard Biesheuvel wrote:
>>> Does the following change fix your issue as well?
>>>
>>> index 9b432d9fcada..9dcf0ff75a11 100644
>>> --- a/arch/arm64/mm/init.c
>>> +++ b/arch/arm64/mm/init.c
>>> @@ -447,7 +447,7 @@ void __init arm64_memblock_init(void)
>>> * memory spans, randomize the linear region as well.
>>> */
>>> if (memstart_offset_seed > 0 && range >= ARM64_MEMSTART_ALIGN) {
>>> - range = range / ARM64_MEMSTART_ALIGN + 1;
>>> + range /= ARM64_MEMSTART_ALIGN;
>>> memstart_addr -= ARM64_MEMSTART_ALIGN *
>>> ((range * memstart_offset_seed) >> 16);
>>> }
>> Yes, it can fix this as well. I just think modifying the first *range*
>> calculation would be easier to grasp; what do you think?
>>
> I don't think there is a difference, to be honest, but I will leave it
> up to the maintainers to decide which approach they prefer.