Date:	Fri, 7 Feb 2014 00:24:50 +0200
From:	"Kirill A. Shutemov" <>
To:	Linus Torvalds <>
Cc:	Peter Anvin <>, Ingo Molnar <>,
	Thomas Gleixner <>,
	Peter Zijlstra <>,
	the arch/x86 maintainers <>,
	Linux Kernel Mailing List <>,
	Ning Qu <>
Subject: Re: [RFC] de-asmify the x86-64 system call slowpath

On Thu, Feb 06, 2014 at 01:29:13PM -0800, Linus Torvalds wrote:
> On Wed, Feb 5, 2014 at 8:33 PM, Linus Torvalds
> <> wrote:
> >
> > Doing the gang-lookup is hard, since it's all abstracted away, but the
> > attached patch kind of tries to do what I described.
> >
> > This patch probably doesn't work, but something *like* this might be
> > worth playing with.
> Interesting. Here are some pte fault statistics with and without the patch.
> I added a few new count_vm_event() counters: PTEFAULT, PTEFILE,
> PTEANON, PTEWP, PTESPECULATIVE for the handle_pte_fault,
> do_linear_fault, do_anonymous_page, do_wp_page and the "let's
> speculatively fill the page tables" case.
> This is what the statistics look like for me doing a "make -j" of a
> fully built allmodconfig build:
>   5007450       ptefault
>   3272038       ptefile
>   1470242       pteanon
>    265121       ptewp
>         0       ptespeculative
> where obviously the ptespeculative count is zero, and I was wrong
> about anon faults being most common - the file mapping faults really
> are the most common for this load (it's fairly fork/exec heavy, I
> guess).
> This is what happens with that patch I posted:
>   2962090       ptefault
>   1195130       ptefile
>   1490460       pteanon
>    276479       ptewp
>   5690724       ptespeculative
> about 2 million page faults went away, and the numbers make sense (ie they
> got removed from the ptefile column - the other number changes are
> just noise).
> Now, we filled 5.7 million extra page table entries to do that (that
> ptespeculative number), so the "hitrate" for the speculative filling
> was basically about 36%. Which doesn't sound crazy - the code
> basically populates the 8 aligned pages around the faulting page.
> Now, because I didn't make this easily dynamically configurable I have
> no good way to really test timing, but the numbers says at least the
> concept works.
> Whether the reduced number of page faults and presumably better
> locality for the speculative prefilling makes up for the fact that
> about 64% of the prefilled entries never get used is very debatable.
> But I think it's a somewhat interesting experiment, and the patch was
> certainly not hugely complicated.
> I should add a switch to turn this on/off and then do many builds in
> sequence to get some kind of idea of whether it actually changes
> performance. But if 5% of the overall time was literally spent on the
> *exception* part of the page fault (ie not counting all the work we do
> in the kernel), I think it's worth looking at this.
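
The window arithmetic behind "populates the 8 aligned pages around the faulting page" above can be sketched in a few lines of C (the constants and names here are illustrative, not taken from the actual patch):

```c
#include <stdint.h>

#define PAGE_SHIFT         12
#define FAULT_AROUND_PAGES 8UL
/* Size of the speculative window in bytes: 8 pages = 32KiB. */
#define FAULT_AROUND_SIZE  (FAULT_AROUND_PAGES << PAGE_SHIFT)

/* Start of the naturally aligned 8-page block containing addr;
 * the speculative fill would cover [start, start + FAULT_AROUND_SIZE). */
static uintptr_t fault_around_start(uintptr_t addr)
{
	return addr & ~(uintptr_t)(FAULT_AROUND_SIZE - 1);
}
```

With a 32KiB window, a fault at 0x12345678 would prefill entries from 0x12340000 up to (but not including) 0x12348000.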

If we want to reduce the number of page faults with less overhead, we
should probably concentrate on minor page faults -- populating PTEs around
the fault address for pages which are already in the page cache. It should
cover the scripting use-case pretty well.

We can also do reasonable batching in this case: make the changes under the
same page table lock and group the radix tree lookups. It may help with
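
A rough userspace model of that idea - look the pages up first, then install all the present ones under a single lock acquisition - might look like this (the flat `resident` array stands in for the batched radix tree lookup, and a pthread mutex stands in for the page table lock; this is a sketch, not kernel code):

```c
#include <pthread.h>
#include <stdbool.h>
#include <stddef.h>

#define WINDOW 8	/* pages around the faulting address */

/* Stand-in for the page table lock. */
static pthread_mutex_t ptl = PTHREAD_MUTEX_INITIALIZER;

/* resident[i]: page i of the window is already in the page cache.
 * pte[i]:      the "page table entry" we install.
 * Only resident pages get mapped, so no I/O is ever started, and all
 * entries go in under one lock round-trip instead of one per page.
 * Returns the number of entries installed. */
static int populate_around(const bool resident[WINDOW], bool pte[WINDOW])
{
	int installed = 0;
	size_t i;

	pthread_mutex_lock(&ptl);
	for (i = 0; i < WINDOW; i++) {
		pte[i] = resident[i];
		if (pte[i])
			installed++;
	}
	pthread_mutex_unlock(&ptl);
	return installed;
}
```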

I'm working on a prototype of this. It's more invasive and too ugly at the
moment. We'll see how it goes. I'll try to show something tomorrow.

Another idea: we could introduce a less strict version of MAP_POPULATE that
populates only what we already have in the page cache.

Heh! `man 3 mmap' actually suggests MAP_POPULATE | MAP_NONBLOCK.
What's the story behind MAP_NONBLOCK?

 Kirill A. Shutemov