Date:	Wed, 26 Mar 2008 15:59:38 +0530
From:	Balbir Singh <balbir@...ux.vnet.ibm.com>
To:	Paul Menage <menage@...gle.com>
CC:	balbir@...ux.vnet.ibm.com, linux-mm@...ck.org,
	Hugh Dickins <hugh@...itas.com>,
	Sudhir Kumar <skumar@...ux.vnet.ibm.com>,
	YAMAMOTO Takashi <yamamoto@...inux.co.jp>, lizf@...fujitsu.com,
	linux-kernel@...r.kernel.org, taka@...inux.co.jp,
	David Rientjes <rientjes@...gle.com>,
	Pavel Emelianov <xemul@...nvz.org>,
	Andrew Morton <akpm@...ux-foundation.org>,
	KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>
Subject: Re: [RFC][-mm] Memory controller add mm->owner

Balbir Singh wrote:
> Paul Menage wrote:
>> On Mon, Mar 24, 2008 at 10:33 AM, Balbir Singh
>> <balbir@...ux.vnet.ibm.com> wrote:
>>>  > OK, so we don't need to handle this for NPTL apps - but for anything
>>>  > still using LinuxThreads or manually constructed clone() calls that
>>>  > use CLONE_VM without CLONE_PID, this could still be an issue.
>>>
>>>  CLONE_PID?? Do you mean CLONE_THREAD?
>> Yes, sorry - CLONE_THREAD.
>>
>>>  For the case you mentioned, mm->owner is a moving target, and we don't want
>>>  to spend time finding the successor; that can be expensive when threads start
>>>  exiting one by one in quick succession and the number of threads is high. I
>>>  wonder whether there is an efficient way to find mm->owner in that case.
>>>
>> But:
>>
>> - running a high-threadcount LinuxThreads process is by definition
>> inefficient and expensive (hence the move to NPTL)
>>
>> - any potential performance hit is only paid at exit time
>>
>> - in the normal case, any of your children or one of your siblings
>> will be a suitable alternate owner
>>
>> - in the worst case, it's not going to be worse than doing a
>> for_each_thread() loop
>>

This will have to be the common case, since we never know which clone() calls
used CLONE_VM and which used CLONE_THREAD. At exit time we need to pay the
for_each_process() overhead. Although it is very unlikely, an application can
call pthread_* functions (NPTL) and then do a clone() with CLONE_VM, forcing
the threads in a thread group and a separate process to share the same
mm_struct. This makes the mm->owner approach hard to implement.
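To make that combination concrete, here is a rough (untested) userspace sketch:
one NPTL thread plus one raw clone() with CLONE_VM but without CLONE_THREAD, so
a separate thread group ends up sharing the caller's mm_struct. Error handling
is omitted and the stack size is arbitrary; compile with -pthread.

#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>

#define STACK_SIZE (64 * 1024)

static int shared_counter;              /* lives in the shared mm */

static void *nptl_thread(void *arg)
{
        shared_counter++;               /* CLONE_VM | CLONE_THREAD via NPTL */
        return NULL;
}

static int cloned_child(void *arg)
{
        shared_counter++;               /* separate process, same mm_struct */
        return 0;
}

int main(void)
{
        pthread_t t;
        char *stack = malloc(STACK_SIZE);

        /* Ordinary NPTL thread: stays inside our thread group. */
        pthread_create(&t, NULL, nptl_thread, NULL);
        pthread_join(t, NULL);

        /* Raw clone: CLONE_VM without CLONE_THREAD, so the child is a
         * distinct thread group that still shares our address space. */
        clone(cloned_child, stack + STACK_SIZE, CLONE_VM | SIGCHLD, NULL);
        wait(NULL);

        printf("shared_counter = %d\n", shared_counter);
        free(stack);
        return 0;
}

Whichever of these tasks happens to be mm->owner, the other two keep the
mm_struct alive, which is exactly the situation that makes picking a successor
awkward.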

>> so I don't think this would be a major problem
>>
> 
> I've been looking at zap_threads, and I suspect we'll end up implementing a
> similar loop, which makes me very uncomfortable: adding code for the least
> likely scenario. It will not get invoked for CLONE_THREAD, but it will get
> invoked when CLONE_VM is set without CLONE_THREAD.
> 
> I'll try and experiment a bit more and see what I come up with
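
For reference, the zap_threads() loop referred to above has roughly the shape
below (a simplified sketch of the fs/exec.c pattern: the coredump signalling is
dropped, the locking is reduced to tasklist_lock, and the helper name is made
up):

/*
 * Walk every thread in the system and pick out those sharing @mm.
 * zap_threads() does this to kill them before a core dump; only the
 * traversal is shown here.
 */
static int count_mm_users(struct mm_struct *mm)
{
        struct task_struct *g, *t;
        int nr = 0;

        read_lock(&tasklist_lock);
        do_each_thread(g, t) {
                if (t->mm == mm)
                        nr++;
        } while_each_thread(g, t);
        read_unlock(&tasklist_lock);

        return nr;
}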

I have yet to benchmark the cost of doing for_each_process() on every exit. I
suspect we'll see a big drop in performance, and I am no longer sure that
mm->owner is worth the overhead.
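
For completeness, the exit-time reassignment I have in mind would look roughly
like the sketch below: try the cheap candidates first (our children, then our
siblings, as suggested above) and only fall back to a system-wide walk when
those fail. The function name is hypothetical, mm->owner is the field this RFC
proposes, and the locking around ->mm is simplified.

/* Pick a new owner for @mm when @exiting, the current owner, goes away. */
static void mm_pick_new_owner(struct task_struct *exiting, struct mm_struct *mm)
{
        struct task_struct *c, *g, *p;

        read_lock(&tasklist_lock);

        /* 1. A child sharing our mm (fork + CLONE_VM, vfork). */
        list_for_each_entry(c, &exiting->children, sibling) {
                if (c->mm == mm)
                        goto assign;
        }

        /* 2. A sibling: another child of our parent. */
        list_for_each_entry(c, &exiting->real_parent->children, sibling) {
                if (c->mm == mm)
                        goto assign;
        }

        /* 3. Worst case: scan every thread in the system, which is the
         *    for_each_process() cost discussed above. */
        do_each_thread(g, p) {
                if (p->mm == mm) {
                        c = p;
                        goto assign;
                }
        } while_each_thread(g, p);

        c = NULL;                       /* no user left; the mm is going away */
assign:
        mm->owner = c;                  /* the field this RFC proposes to add */
        read_unlock(&tasklist_lock);
}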


-- 
	Warm Regards,
	Balbir Singh
	Linux Technology Center
	IBM, ISTL
