Message-ID: <20190624100742.GM24419@MiWiFi-R3L-srv>
Date: Mon, 24 Jun 2019 18:07:42 +0800
From: Baoquan He <bhe@...hat.com>
To: "Kirill A. Shutemov" <kirill@...temov.name>
Cc: Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
"H. Peter Anvin" <hpa@...or.com>,
Dave Hansen <dave.hansen@...ux.intel.com>,
Andy Lutomirski <luto@...nel.org>,
Peter Zijlstra <peterz@...radead.org>, x86@...nel.org,
linux-kernel@...r.kernel.org,
"Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>,
Kyle Pelton <kyle.d.pelton@...el.com>
Subject: Re: [PATCH] x86/mm: Handle physical-virtual alignment mismatch in
phys_p4d_init()
On 06/21/19 at 01:54pm, Kirill A. Shutemov wrote:
> > The code block below zeroes p4d entries which are not covered by
> > the current memory range and which haven't been mapped already. It's
> > cleared away in this patch; could you also mention that in the log,
> > and explain why it doesn't matter now?
> >
> > If it doesn't matter, should we also remove the similar code in
> > phys_pud_init/phys_pmd_init/phys_pte_init? Maybe a prep patch to do
> > the cleanup?
>
> It only matters for the levels that contain page table entries which
> can point to pages, not page tables. There are no p4d or pgd huge
> pages on x86. Otherwise we only leak page tables without any benefit.
Ah, I checked the git history but didn't find why it was added. I only
have a superficial understanding of the clearing; it does seem to be
done in a low-efficiency way.
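
For anyone reading along, my rough mental model of the block in question
is the user-space sketch below. The types and helpers (PTRS_PER_LEVEL,
ENTRY_SIZE, already_mapped()) are made up for illustration; the real
code in arch/x86/mm/init_64.c consults the e820 map before clearing:

#include <stdint.h>
#include <stdbool.h>

#define PTRS_PER_LEVEL 512
#define ENTRY_SIZE (1ULL << 30)   /* stand-in for P4D_SIZE */

/* Stub standing in for the e820 "is anything mapped here?" checks. */
static bool already_mapped(uint64_t start, uint64_t end)
{
    (void)start; (void)end;
    return false;
}

/*
 * Zero the table slots that lie past the end of the range being
 * mapped, unless something is already mapped there.
 */
static void zero_uncovered(uint64_t *table, uint64_t paddr,
                           uint64_t paddr_end)
{
    unsigned int i = (paddr / ENTRY_SIZE) % PTRS_PER_LEVEL;

    for (; i < PTRS_PER_LEVEL; i++, paddr += ENTRY_SIZE) {
        if (paddr >= paddr_end) {
            if (!already_mapped(paddr, paddr + ENTRY_SIZE))
                table[i] = 0;  /* set_p4d(p4d, __p4d(0)) in the kernel */
            continue;
        }
        /* ... normal mapping path for covered slots ... */
    }
}

int main(void)
{
    static uint64_t table[PTRS_PER_LEVEL];

    /* Map [0, 2 * ENTRY_SIZE): slots 2..511 get zeroed. */
    zero_uncovered(table, 0, 2 * ENTRY_SIZE);
    return 0;
}

In that model, your point is that at the p4d level table[i] can only
ever point to a pud page table, so zeroing it just leaks that table.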
>
> We might do this on all levels under a p?d_large() condition and not
> touch page tables at all.
I see.
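
If I follow the idea, the clearing would then be guarded so that only
leaf (huge page) entries are ever zeroed. A rough variant of the sketch
above, with a made-up LEAF bit standing in for _PAGE_PSE and is_leaf()
standing in for p?d_large():

#include <stdint.h>
#include <stdbool.h>

#define PTRS_PER_LEVEL 512
#define ENTRY_SIZE (1ULL << 30)
#define LEAF (1ULL << 7)          /* stand-in for _PAGE_PSE */

/* Models the p?d_large() test: is this entry a huge page mapping? */
static bool is_leaf(uint64_t entry)
{
    return entry & LEAF;
}

/*
 * Only leaf entries past the end of the range are zeroed; entries
 * pointing to lower-level page tables are left alone, so no page
 * table is leaked.
 */
static void zero_uncovered_leaves(uint64_t *table, uint64_t paddr,
                                  uint64_t paddr_end)
{
    unsigned int i = (paddr / ENTRY_SIZE) % PTRS_PER_LEVEL;

    for (; i < PTRS_PER_LEVEL; i++, paddr += ENTRY_SIZE)
        if (paddr >= paddr_end && is_leaf(table[i]))
            table[i] = 0;
}

int main(void)
{
    static uint64_t table[PTRS_PER_LEVEL];

    table[2] = 0x200000000ULL | LEAF; /* huge page: cleared */
    table[3] = 0x12345000ULL;         /* page table pointer: kept */
    zero_uncovered_leaves(table, 0, 2 * ENTRY_SIZE);
    return 0;
}

Since there are no p4d or pgd huge pages on x86, at those levels
is_leaf() would never be true, and their page tables would never be
touched.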
>
> BTW, it all becomes rather risky this late in the release cycle. Maybe
> we should revert the original patch and try again later with a more
> comprehensive solution?
It wasn't all added at one time. I am fine with your current change; it
would be much better if you mention it in the log and also add a code
comment above the clearing code. Reverting and trying again later with
a more comprehensive solution is also fine with me, though that needs a
little more effort.
Thanks
Baoquan