Message-ID: <CACT4Y+bxN5nuctHT1J+4f1i1Ufdt4OQkrGSCaCNCcbzVKuwJMA@mail.gmail.com>
Date: Fri, 10 Feb 2017 15:17:27 +0100
From: Dmitry Vyukov <dvyukov@...gle.com>
To: Andrey Ryabinin <aryabinin@...tuozzo.com>
Cc: Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...nel.org>,
"H. Peter Anvin" <hpa@...or.com>,
"x86@...nel.org" <x86@...nel.org>,
Tobias Regnery <tobias.regnery@...il.com>,
"Paul E . McKenney" <paulmck@...ux.vnet.ibm.com>,
Alexander Potapenko <glider@...gle.com>,
kasan-dev <kasan-dev@...glegroups.com>,
LKML <linux-kernel@...r.kernel.org>,
stable <stable@...r.kernel.org>,
Mark Rutland <mark.rutland@....com>
Subject: Re: [PATCH] x86/mm/ptdump: Fix soft lockup in page table walker.
On Fri, Feb 10, 2017 at 2:56 PM, Andrey Ryabinin
<aryabinin@...tuozzo.com> wrote:
> On 02/10/2017 04:02 PM, Dmitry Vyukov wrote:
>> On Fri, Feb 10, 2017 at 1:15 PM, Andrey Ryabinin
>> <aryabinin@...tuozzo.com> wrote:
>>>
>>>
>>> On 02/10/2017 02:18 PM, Thomas Gleixner wrote:
>>>> On Fri, 10 Feb 2017, Dmitry Vyukov wrote:
>>>>> This is the right thing to do per se, but I am concerned that now
>>>>> people will just suffer from slow boot (it can take literally
>>>>> minutes) and will not realize the root cause nor that it's fixable
>>>>> (e.g. with rodata=n) and will probably just blame KASAN for slowness.
>>>>>
>>>>> Could we default this rodata check to n under KASAN? Or at least print
>>>>> some explanatory warning message before marking rodata (it
>>>>> should be printed right before "hang", so if you stare at it for a
>>>>> minute during each boot you realize that it may be related)? Or
>>>>> something along these lines. FWIW in my builds I just always disable
>>>>> the check.
>>>>
>>>> That certainly makes sense and we emit such warnings in other places
>>>> already (lockdep, trace_printk ...)
>>>>
>>>
>>> Agreed, but perhaps it would be better to make this code faster for KASAN=y?
>>> The main problem here is that we have many pgd entries containing kasan_zero_pud values
>>> and the ptdump walker checks kasan_zero_pud many times.
>>> Instead, we could check it only once and skip further kasan_zero_pud entries.
>>>
>>> I can't say I like this hack very much, but it saves me almost 20 seconds of boot time.
>>> Any objections?
Looks good to me.
>> Now I remember that we already discussed it in this thread:
>> https://lkml.org/lkml/2016/11/8/775
>>
>> Andrey, you proposed:
>>
>> "I didn't look at any code, but we probably could can remember last
>> visited pgd and skip next pgd if it's the same as previous."
>>
>> Do you still think it's a good idea?
>
> Ah, indeed. It will do roughly the same but with less code churn, see below.
>
>> Walking the same pgd multiple times does not make sense (right?). And
>> it could probably speed up non-kasan builds to some degree in some
>> contexts. And the code will be free of additional ifdefs.
>>
>
> We could make it without ifdefs, but this would be useless for KASAN=n
> as page table entries are normally unique. So I'm thinking of adding an #ifdef
> at least for documentation purposes.
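
For what it's worth, a rough sketch of that #ifdef variant (untested; the
patch below leaves it out) could compile the helper away entirely for
KASAN=n builds:

#ifdef CONFIG_KASAN
/*
 * With KASAN most kernel pgd entries point at the shared kasan_zero_pud
 * page, so consecutive entries are frequently identical and the W+X check
 * only needs to descend into the first one.
 */
static bool pgd_already_checked(pgd_t *prev_pgd, pgd_t *pgd, bool checkwx)
{
	return checkwx && prev_pgd && (pgd_val(*prev_pgd) == pgd_val(*pgd));
}
#else
static bool pgd_already_checked(pgd_t *prev_pgd, pgd_t *pgd, bool checkwx)
{
	return false;
}
#endif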
>
>
>
> diff --git a/arch/x86/mm/dump_pagetables.c b/arch/x86/mm/dump_pagetables.c
> index 8aa6bea..1599a5c 100644
> --- a/arch/x86/mm/dump_pagetables.c
> +++ b/arch/x86/mm/dump_pagetables.c
> @@ -373,6 +373,11 @@ static inline bool is_hypervisor_range(int idx)
>  #endif
>  }
>
> +static bool pgd_already_checked(pgd_t *prev_pgd, pgd_t *pgd, bool checkwx)
> +{
> +	return checkwx && prev_pgd && (pgd_val(*prev_pgd) == pgd_val(*pgd));
> +}
> +
>  static void ptdump_walk_pgd_level_core(struct seq_file *m, pgd_t *pgd,
>  				       bool checkwx)
>  {
> @@ -381,6 +386,7 @@ static void ptdump_walk_pgd_level_core(struct seq_file *m, pgd_t *pgd,
>  #else
>  	pgd_t *start = swapper_pg_dir;
>  #endif
> +	pgd_t *prev_pgd = NULL;
>  	pgprotval_t prot;
>  	int i;
>  	struct pg_state st = {};
> @@ -396,7 +402,8 @@ static void ptdump_walk_pgd_level_core(struct seq_file *m, pgd_t *pgd,
>
>  	for (i = 0; i < PTRS_PER_PGD; i++) {
>  		st.current_address = normalize_addr(i * PGD_LEVEL_MULT);
> -		if (!pgd_none(*start) && !is_hypervisor_range(i)) {
> +		if (!pgd_none(*start) && !is_hypervisor_range(i) &&
> +		    !pgd_already_checked(prev_pgd, start, checkwx)) {
>  			if (pgd_large(*start) || !pgd_present(*start)) {
>  				prot = pgd_flags(*start);
>  				note_page(m, &st, __pgprot(prot), 1);
> @@ -408,6 +415,7 @@ static void ptdump_walk_pgd_level_core(struct seq_file *m, pgd_t *pgd,
>  			note_page(m, &st, __pgprot(0), 1);
>
>  		cond_resched();
> +		prev_pgd = start;
>  		start++;
>  	}
>
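
For context, the duplicate entries come from the early KASAN shadow setup,
which points a large range of kernel pgd slots at the same zero pud.
Roughly (a from-memory simplification of arch/x86/mm/kasan_init_64.c, so
details may differ between versions):

	unsigned long addr = KASAN_SHADOW_START;
	int i;

	for (i = pgd_index(addr); addr < KASAN_SHADOW_END; i++) {
		/* every shadow pgd entry references the same zero pud page */
		pgd[i] = __pgd(__pa_nodebug(kasan_zero_pud) | _KERNPG_TABLE);
		addr += PGDIR_SIZE;
	}

That is exactly the pattern the pgd_already_checked() skip above exploits:
once the first kasan_zero_pud entry has been walked, the following identical
ones can be passed over.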