Date:	Sun, 16 Mar 2014 20:13:05 -0700
From:	Sarah Newman <srn@...mr.com>
To:	David Vrabel <david.vrabel@...rix.com>,
	"H. Peter Anvin" <hpa@...or.com>
CC:	linux-kernel@...r.kernel.org, Thomas Gleixner <tglx@...utronix.de>,
	Ingo Molnar <mingo@...hat.com>, xen-devel@...ts.xen.org
Subject: Re: [PATCHv1] x86: don't schedule when handling #NM exception

On 03/10/2014 10:15 AM, David Vrabel wrote:
> On 10/03/14 16:40, H. Peter Anvin wrote:
>> On 03/10/2014 09:17 AM, David Vrabel wrote:
>>> math_state_restore() is called from the #NM exception handler.  It may
>>> do a GFP_KERNEL allocation (in init_fpu()) which may schedule.
>>>
>>> Change this allocation to GFP_ATOMIC, but leave all the other callers
>>> of init_fpu() or fpu_alloc() using GFP_KERNEL.
>>
>> And what the [Finnish] do you do if GFP_ATOMIC fails?
> 
> The same thing it used to do -- kill the task with SIGKILL.  I haven't
> changed this behaviour.
> 
>> Sarah's patchset switches Xen PV to use eagerfpu unconditionally, which
>> removes the dependency on #NM and is the right thing to do.
> 
> Ok. I'll wait for this series and not pursue this patch any further.

Sorry, this got swallowed by my mail filter.

I did some more testing and I think eagerfpu is going to noticeably slow things down. When I ran
"time sysbench --num-threads=64 --test=threads run" I saw on the order of 15% more time spent in
system mode, and this was consistent across different runs.

As for GFP_ATOMIC, unfortunately I don't know of a sanctioned test here, so I rolled my own. The
test sequentially started math-using processes in the background until it could not start any more.
On a 64MB instance, I saw about 10% fewer processes started with GFP_ATOMIC than with GFP_KERNEL
when I continually started new processes up to OOM conditions (256 vs. 228). A similar test on a
different RFS and a kernel using GFP_NOWAIT showed essentially no difference in how many processes
I could start. This doesn't seem too bad unless there is some kind of fragmentation over time that
would make it worse.
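
For reference, here is a minimal userland sketch of the kind of test I mean (the loop, the names
and the build line are illustrative; it is not the exact script I ran):

/*
 * Illustrative only: fork children that touch the FPU, so that with
 * lazy FPU each child's first FP instruction takes a #NM fault and
 * allocates its FPU state, and count how many children can be started
 * before fork() fails.
 *
 * Build with: gcc -O2 -o nm-oom-test nm-oom-test.c -lm
 */
#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>
#include <math.h>

int main(void)
{
	int count = 0;

	for (;;) {
		pid_t pid = fork();

		if (pid < 0)
			break;		/* out of memory (or PIDs) */
		if (pid == 0) {
			/* Child: use the FPU, then park so its state
			 * stays allocated. */
			volatile double x = sqrt((double)getpid());
			(void)x;
			pause();
			_exit(0);
		}
		count++;
	}

	printf("started %d math-using children\n", count);
	pause();	/* keep the children alive for inspection */
	return 0;
}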

Since the performance degradation applies at all times and not just under extreme conditions, I
think the lesser evil will actually be GFP_ATOMIC.  But it's not necessary to always use
GFP_ATOMIC, only under certain conditions, i.e. when the Xen PV ABI forces us to.
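
Roughly the shape I have in mind is sketched below; the fpu_alloc_flags() name and the extra gfp_t
argument are illustrative only (the current fpu_alloc() hardcodes GFP_KERNEL), not the actual
patch:

/*
 * Sketch only: let the one path that must not schedule (the #NM trap
 * under the Xen PV ABI) ask for an atomic allocation, while every
 * other caller keeps GFP_KERNEL.
 */
static int fpu_alloc_flags(struct fpu *fpu, gfp_t gfp)
{
	if (fpu->state)
		return 0;
	fpu->state = kmem_cache_alloc(task_xstate_cachep, gfp);
	if (!fpu->state)
		return -ENOMEM;
	return 0;
}

The #NM path would pass GFP_ATOMIC and every existing caller would keep passing GFP_KERNEL, so
nothing changes for the common case.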

Patches will be supplied shortly.
