Date:	Tue, 24 Dec 2013 21:46:25 +0100
From:	Ingo Molnar <mingo@...nel.org>
To:	Linus Torvalds <torvalds@...ux-foundation.org>
Cc:	Peter Anvin <hpa@...or.com>, Ingo Molnar <mingo@...hat.com>,
	Thomas Gleixner <tglx@...utronix.de>,
	Al Viro <viro@...iv.linux.org.uk>,
	the arch/x86 maintainers <x86@...nel.org>,
	linux-fsdevel <linux-fsdevel@...r.kernel.org>,
	Linux Kernel Mailing List <linux-kernel@...r.kernel.org>
Subject: Re: [RFC] speeding up the stat() family of system calls...


* Linus Torvalds <torvalds@...ux-foundation.org> wrote:

> Here's both x86 people and filesystem people involved, because this
> hacky RFC patch touches both.
> 
> NOTE NOTE NOTE! I've modified "cp_new_stat()" in place, in a way that
> is x86-64 specific. So the attached patch *only* works on x86-64, and
> will very actively break on anything else. That's intentional, because
> that way it's more obvious how the code changes, but a real patch
> would create a *new* cp_new_stat() for x86-64, and conditionalize the
> existing generic "cp_new_stat()" on not already having an
> architecture-optimized one.
> 
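[ Side note for people following along: the conditionalization Linus
  mentions would presumably use the usual arch-override pattern, roughly
  like the sketch below - the macro name here is made up, it is not from
  the patch:

	/* arch/x86/include/asm/stat.h - sketch, name is illustrative */
	#define ARCH_HAS_CP_NEW_STAT

	/* fs/stat.c */
	#ifndef ARCH_HAS_CP_NEW_STAT
	static int cp_new_stat(struct kstat *stat, struct stat __user *statbuf)
	{
		/* existing generic double-copy implementation stays here */
	}
	#endif

  so an architecture that provides its own optimized copy simply
  compiles out the generic one. ]
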
> Basically, under some filesystem loads, "stat()" and friends are the
> most common ops (think tree traversal, but also things like git
> verifying the index). And our "cp_new_stat()" function (which is the
> common interface, ignoring 32-bit legacy stuff) is generic, but
> actually pretty disgusting. It copies things to a temporary 'struct
> stat' buffer on the kernel stack, and then uses copy_to_user() to copy
> it to user space. The double copy is quite noticeable in profiles, and
> it generates a big stack frame too.
> 
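[ For reference, the generic path being described has roughly this
  shape - a sketch, not the exact fs/stat.c code:

	static int cp_new_stat(struct kstat *stat, struct stat __user *statbuf)
	{
		struct stat tmp;	/* temporary buffer on the kernel stack */

		memset(&tmp, 0, sizeof(tmp));
		/* first copy: struct kstat -> tmp, field by field */
		tmp.st_dev   = new_encode_dev(stat->dev);
		tmp.st_ino   = stat->ino;
		tmp.st_mode  = stat->mode;
		tmp.st_nlink = stat->nlink;
		tmp.st_size  = stat->size;
		/* ... remaining fields ... */

		/* second copy: tmp -> user space */
		return copy_to_user(statbuf, &tmp, sizeof(tmp)) ? -EFAULT : 0;
	}

  hence the double copy and the large stack frame. ]
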
> By doing an architecture-specific cp_new_stat() function, we can
> improve on that.
> 
> HOWEVER. On x86, doing an efficient field-at-a-time copy also requires
> us to use put_user_try() and put_user_catch() in order to not have
> tons of clac/stac instructions for the extended permission testing.
> And the implementation of that was actually fairly non-optimal, so to
> actually get the code I wanted, I had to change how that all worked
> too, using "asm_volatile_goto()".
> 
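[ Concretely, the field-at-a-time variant would look something like the
  sketch below - again not the actual patch, just the shape, using the
  existing x86 put_user_try/put_user_ex/put_user_catch helpers:

	static int cp_new_stat(struct kstat *stat, struct stat __user *statbuf)
	{
		int err = 0;

		if (!access_ok(VERIFY_WRITE, statbuf, sizeof(*statbuf)))
			return -EFAULT;

		put_user_try {
			/* a single stac/clac pair brackets all the stores */
			put_user_ex(new_encode_dev(stat->dev), &statbuf->st_dev);
			put_user_ex(stat->ino,   &statbuf->st_ino);
			put_user_ex(stat->mode,  &statbuf->st_mode);
			put_user_ex(stat->nlink, &statbuf->st_nlink);
			put_user_ex(stat->size,  &statbuf->st_size);
			/* ... remaining fields ... */
		} put_user_catch(err);

		return err;
	}

  i.e. each field is stored straight to user space, with faults handled
  via the exception table instead of a per-store error check. ]
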
> Thus both x86 and FS people on the list.
> 
> Comments? This would obviously be a 3.14 issue; I'm not suggesting 
> we'd do this now. I just want to lay the groundwork..
> 
> It's tested in the sense that "it works for me", and profiles look 
> nice, but..

Looks cool - it's rather similar to the try/catch model Richard 
Henderson came up with many eons ago, when we implemented the original 
exception mechanism for Linux, which IIRC we decided wasn't safe back 
then due to the lack of compiler support.

Now we have compiler support ... written by Richard Henderson ;-)

Thanks,

	Ingo
