Date:	Wed, 16 May 2012 00:37:02 -0400
From:	Doug Ledford <dledford@...hat.com>
To:	Andrew Morton <akpm@...ux-foundation.org>
CC:	Sasha Levin <levinsasha928@...il.com>,
	kosaki.motohiro@...fujitsu.com, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] ipc/mqueue: use correct gfp flags in msg_insert

On 5/15/2012 5:38 PM, Andrew Morton wrote:
> On Mon, 14 May 2012 22:45:57 -0400
> Doug Ledford <dledford@...hat.com> wrote:

>>> Switching to GFP_ATOMIC is a bit regrettable.  Can we avoid this by
>>> speculatively allocating the memory before taking the lock, then free
>>> it again if we ended up not using it?
>>
>> Not really, we take the lock in a different function than this and would
>> have to pass around a node struct and then free it if we didn't use it.
>>  I mean, it could be done, but it would fugly the calls around this up.
>>  The msg_insert() routine is called in two places.  In one place, the
>> lock is taken right there so you could allocate before and then call.
>> In the other, it is another function called with the lock held so now
>> you would have to pass the possible mem allocation around two functions.
>>  Doable, but ugly.
> 
> Well, it's not *too* bad: just pass a void** around, zero it if it got
> used, then unconditionally free it.  Nice and easy.
> 
> But in a contest between source-level beauty and runtime robustness,
> robustness surely wins?

Sure, things must be robust.  It's just a question of what qualifies
as robust.
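
For what it's worth, here is a rough sketch of the shape Andrew is
describing: preallocate before taking the lock, have the insert path
zero the pointer if it consumes the node, then free unconditionally.
The names and signatures below (posix_msg_tree_node, wq_add, the extra
msg_insert() argument, info->lock) are placeholders, not the actual
ipc/mqueue.c code:

	/*
	 * Sketch only -- names and signatures are illustrative.
	 */
	static int wq_add(struct mqueue_inode_info *info, struct msg_msg *msg)
	{
		struct posix_msg_tree_node *new_leaf;
		int ret;

		/* Speculative allocation: may sleep, so do it before the lock. */
		new_leaf = kmalloc(sizeof(*new_leaf), GFP_KERNEL);

		spin_lock(&info->lock);
		/* msg_insert() sets *leafp to NULL if it actually used the node. */
		ret = msg_insert(msg, info, &new_leaf);
		spin_unlock(&info->lock);

		/* Unconditionally free; kfree(NULL) is a no-op if it was consumed. */
		kfree(new_leaf);

		return ret;
	}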

>>  On the other hand, this is a small struct that
>> should be coming off one of the small size kmem cache pools (4 pointers
>> total, a long, and an int, so kmalloc-32 or kmalloc-64 depending on
>> arch).  That doesn't seem like a likely candidate to fail if there is
>> memory pressure, especially considering that immediately prior to taking
>> the lock we call kmalloc with GFP_KERNEL (as part of load_msg()) and so
>> we should either not be under serious memory pressure or we would have
>> slept waiting for it to ease up.
> 
> True.  But weaselly!

And we don't rely on weaselly things in the kernel all the time? ;-)
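
For reference, the arithmetic behind the kmalloc-32/kmalloc-64 claim,
assuming a node layout roughly like the one described above (this is
reconstructed from the description in the mail, so approximate):

	/* Approximate node layout: an rbtree node (a long plus 2 pointers),
	 * a list head (2 pointers), and an int priority. */
	struct posix_msg_tree_node {
		struct rb_node		rb_node;	/* unsigned long + 2 pointers */
		struct list_head	msg_list;	/* 2 pointers */
		int			priority;
	};
	/*
	 * 64-bit: 4*8 (pointers) + 8 (long) + 4 (int) = 44 bytes -> kmalloc-64
	 * 32-bit: 4*4 (pointers) + 4 (long) + 4 (int) = 24 bytes -> kmalloc-32
	 */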

>> I think I can imagine a better way to do this though as part of the
>> whole request to cache at least one rbnode entry so we get the 0 message
>> performance of the queue back.  I'll send that patch through once I've
>> verified it does what I think it will.
> 
> That sounds good.

I coded something up, but I'm not happy with it.  Trying again.  One of
my problems is that I *really* don't like speculatively allocating this
little struct, because in the common case it's a total waste of CPU
cycles.  While my patch sped up operations under any sort of decent
queue depth, it slowed down the queue-empty case by 100ns.  I'm trying
to get some of that loss back, and doing a useless speculative
allocation doesn't help :-/
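
A rough sketch of the cached-node idea discussed above; the node_cache
field and the surrounding names are hypothetical here, not the eventual
patch:

	/*
	 * Sketch: cache one spare tree node on the queue so the common
	 * send path avoids allocating entirely.
	 */
	static int msg_insert(struct msg_msg *msg, struct mqueue_inode_info *info)
	{
		struct posix_msg_tree_node *leaf;

		if (info->node_cache) {
			/* Fast path: reuse the node cached by an earlier remove. */
			leaf = info->node_cache;
			info->node_cache = NULL;
		} else {
			leaf = kmalloc(sizeof(*leaf), GFP_ATOMIC);
			if (!leaf)
				return -ENOMEM;
		}
		leaf->priority = msg->m_type;
		INIT_LIST_HEAD(&leaf->msg_list);
		/* ... link leaf into info->msg_tree, as before ... */
		return 0;
	}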

-- 
Doug Ledford <dledford@...hat.com>
              GPG KeyID: 0E572FDD
	      http://people.redhat.com/dledford

Infiniband specific RPMs available at
	      http://people.redhat.com/dledford/Infiniband

