Message-ID: <574AB6C6-2F2F-45CA-8B91-8EEF3D8ADAC4@fb.com>
Date: Tue, 20 Aug 2019 16:38:02 +0000
From: Song Liu <songliubraving@...com>
To: Dave Hansen <dave.hansen@...el.com>
CC: Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Linux MM <linux-mm@...ck.org>,
Kernel Team <Kernel-team@...com>,
"stable@...r.kernel.org" <stable@...r.kernel.org>,
Joerg Roedel <jroedel@...e.de>,
Thomas Gleixner <tglx@...utronix.de>,
Dave Hansen <dave.hansen@...ux.intel.com>,
Andy Lutomirski <luto@...nel.org>,
"Peter Zijlstra" <peterz@...radead.org>
Subject: Re: [PATCH] x86/mm/pti: in pti_clone_pgtable() don't increase addr by
PUD_SIZE
> On Aug 20, 2019, at 9:05 AM, Song Liu <songliubraving@...com> wrote:
>
>
>
>> On Aug 20, 2019, at 7:18 AM, Dave Hansen <dave.hansen@...el.com> wrote:
>>
>> On 8/20/19 7:14 AM, Song Liu wrote:
>>>> *But*, that shouldn't get hit on a Skylake CPU since those have PCIDs
>>>> and shouldn't have a global kernel image. Could you confirm whether
>>>> PCIDs are supported on this CPU?
>>> Yes, pcid is listed in /proc/cpuinfo.
>>
>> So what's going on? Could you confirm exactly which pti_clone_pgtable()
>> is causing you problems? Do you have a theory as to why this manifests
>> as a performance problem rather than a functional one?
>>
>> A diff of these:
>>
>> /sys/kernel/debug/page_tables/current_user
>> /sys/kernel/debug/page_tables/current_kernel
>>
>> before and after your patch might be helpful.
>
> I believe the difference is from the following entries (7 PMDs)
>
> Before the patch:
>
> current_kernel: 0xffffffff81000000-0xffffffff81e04000 14352K ro GLB x pte
> efi: 0xffffffff81000000-0xffffffff81e04000 14352K ro GLB x pte
> kernel: 0xffffffff81000000-0xffffffff81e04000 14352K ro GLB x pte
>
>
> After the patch:
>
> current_kernel: 0xffffffff81000000-0xffffffff81e00000 14M ro PSE GLB x pmd
> efi: 0xffffffff81000000-0xffffffff81e00000 14M ro PSE GLB x pmd
> kernel: 0xffffffff81000000-0xffffffff81e00000 14M ro PSE GLB x pmd
>
> current_kernel and kernel show same data though.
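(Spelling out the "7 PMDs" above: 0xffffffff81e00000 - 0xffffffff81000000
= 0xe00000 = 14 MB = 7 x 2 MB PMDs. Before the patch the same text, plus a
16 KB tail ending at 0xffffffff81e04000, was covered by
14352 KB / 4 KB = 3588 4K PTEs instead.)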
A little more detail on how I got here.
We use huge pages for hot text to reduce iTLB misses. While benchmarking a
5.2 based kernel (vs. a 4.16 based one), we found ~2.5x more iTLB misses.
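(For reference, the iTLB numbers come from the hardware counters; something
along the lines of the following, with the workload pid filled in, collects
the generic events, though the exact counters behind them vary by CPU:

  # perf stat -e iTLB-loads,iTLB-load-misses -p <workload-pid> sleep 10
)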
To figure out the issue, I used a debug patch that dumps the page table for
a given pid. The following is the information for the workload pid.
For the 4.16 based kernel:
host-4.16 # grep "x pmd" /sys/kernel/debug/page_tables/dump_pid
0x0000000000600000-0x0000000000e00000 8M USR ro PSE x pmd
0xffffffff81a00000-0xffffffff81c00000 2M ro PSE x pmd
For the 5.2 based kernel before this patch:
host-5.2-before # grep "x pmd" /sys/kernel/debug/page_tables/dump_pid
0x0000000000600000-0x0000000000e00000 8M USR ro PSE x pmd
The 8MB of text in PMDs is from user space. The 4.16 kernel has 1 PMD for
the irq entry table, while the 5.2 kernel (before this patch) doesn't have it.
For the 5.2 based kernel after this patch:
host-5.2-after # grep "x pmd" /sys/kernel/debug/page_tables/dump_pid
0x0000000000600000-0x0000000000e00000 8M USR ro PSE x pmd
0xffffffff81000000-0xffffffff81e00000 14M ro PSE GLB x pmd
So after this patch, the 5.2 based kernel has 7 PMDs for kernel text,
instead of the 1 PMD in the 4.16 kernel. This further reduces the iTLB
miss rate.
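To illustrate the stepping problem named in the subject line: advancing an
address that is not PUD-aligned by a full PUD_SIZE overshoots the next PUD
boundary, skipping whatever mappings sit at its start. The program below is
not the kernel code, just a user-space sketch of that arithmetic, using the
kernel-text start address from the dumps above and the x86_64 values of
PMD_SIZE (2 MB) and PUD_SIZE (1 GB):

#include <stdio.h>

#define PMD_SIZE (2UL << 20)   /* 2 MB on x86_64 */
#define PUD_SIZE (1UL << 30)   /* 1 GB on x86_64 */

/* round x up to the next multiple of 'align' (align is a power of two) */
#define round_up(x, align) (((x) + (align) - 1) & ~((align) - 1))

int main(void)
{
	/* start of kernel text in the dumps above: 16 MB into its PUD,
	 * so not PUD-aligned */
	unsigned long addr = 0xffffffff81000000UL;

	/* blindly adding PUD_SIZE lands 16 MB past the next 1 GB
	 * boundary, skipping whatever PMDs live at the start of
	 * that PUD */
	printf("addr + PUD_SIZE          = %#lx\n", addr + PUD_SIZE);

	/* rounding up instead stops exactly at the next PUD boundary */
	printf("next PUD boundary        = %#lx\n",
	       round_up(addr + 1, PUD_SIZE));

	/* stepping PMD by PMD walks every 2 MB mapping in the range:
	 * 0xffffffff81e00000 - 0xffffffff81000000 = 14 MB = 7 PMDs */
	printf("PMDs covering the text   = %lu\n",
	       (0xffffffff81e00000UL - addr) / PMD_SIZE);

	return 0;
}

With a step that doesn't overshoot, the walk visits all 7 PMDs covering the
text range.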
Thanks,
Song