Date:	Tue, 9 Jun 2009 09:41:17 +0900
From:	KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>
To:	Matthew Von Maszewski <matthew@...hive.com>
Cc:	linux-kernel@...r.kernel.org,
	"linux-mm@...ck.org" <linux-mm@...ck.org>
Subject: Re: huge mem mmap eats all CPU when multiple processes

On Mon, 8 Jun 2009 10:27:49 -0400
Matthew Von Maszewski <matthew@...hive.com> wrote:

> [note: not on kernel mailing list, please cc author]
> 
> Symptom:  9 processes mmap the same 2 GB memory section for a shared C
> heap (lots of random access).  All processes start showing extreme CPU
> load in top.
> 
> - The same code works well when only a single process accesses the huge mem.
Does this "huge mem" mean HugeTLB (2MB/4MB) pages?

> - The code works well with a standard VM-backed mmap'd file and 9 processes.
> 

What is the sys/user ratio in top?  Are almost all CPUs busy in "sys"?

> Environment:
> 
> - Intel x86_64:  Dual core Xeon with hyperthreading (4 logical  
> processors)
> - 6 GB RAM, 2.5 GB allocated to huge mem
Allocated via a boot option (e.g. hugepages=N)?

> - tried with kernels 2.6.29.4 and 2.6.30-rc8
> - the following mmap() call uses NULL as the base address in the first
> process; the returned address is then passed to the subsequent processes
> (not threads, processes)
> 
>             m_MemSize = ((m_MemSize / (2048*1024)) + 1) * 2048*1024;
>             m_BaseAddr = mmap(m_File->GetFixedBase(), m_MemSize,
>                               (PROT_READ | PROT_WRITE),
>                               MAP_SHARED, m_File->GetFileId(), m_Offset);
> 
> 
> I am not a kernel hacker, so I have not attempted to debug this.  I will
> be able to spend time on a sample program for sharing later today or
> tomorrow.  Sending this note now in case this is already a known issue.
> 

IIUC, all page faults on hugetlb pages are serialized by a system-wide mutex,
so faulting the pages in from several processes in parallel does not go any
faster. So, in general, I wonder whether it is better to have one thread touch
all of the necessary mappings up front, as in the sketch below.
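For example, something like this rough, untested sketch (assuming 2MB huge
pages; prefault() is a made-up helper name), run once by the first process
right after its mmap():

    #include <stddef.h>

    #define HPAGE_SIZE (2UL * 1024 * 1024)  /* assumes 2MB huge pages */

    /* Fault in every huge page of the mapping from a single thread,
     * so the other processes find the pages already instantiated
     * instead of all contending on the serialized fault path. */
    static void prefault(void *base, size_t len)
    {
            volatile char *p = base;
            size_t off;

            for (off = 0; off < len; off += HPAGE_SIZE)
                    (void)p[off];
    }

    /* e.g. prefault(m_BaseAddr, m_MemSize); in the first process */

If the contention really is in the hugetlb fault path, touching the pages once
up front like this should remove most of the "sys" time the other processes see.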



> Don't suppose this is as simple as a Copy-On-Write flag being set wrong?
> 
I don't think so.

> Please let me know what I need to capture to better describe this bug.
> Happy to do the work.
> 
Adding Cc: linux-mm.

Thanks,
-Kame


> Thanks,
> Matthew

