Date:	Tue, 12 Aug 2014 18:01:31 +0300
From:	"Kirill A. Shutemov" <kirill@...temov.name>
To:	Eric Dumazet <eric.dumazet@...il.com>
Cc:	Oren Twaig <oren@...lemp.com>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	linux-mm@...ck.org,
	"Shai Fultheim (Shai@...leMP.com)" <Shai@...lemp.com>
Subject: Re: x86: vmalloc and THP

On Tue, Aug 12, 2014 at 05:28:52AM -0700, Eric Dumazet wrote:
> On Tue, 2014-08-12 at 09:07 +0300, Kirill A. Shutemov wrote:
> > On Tue, Aug 12, 2014 at 08:00:54AM +0300, Oren Twaig wrote:
> > >If not, is there any fast way to change this behavior ? Maybe by
> > >changing the granularity/alignment of such allocations to allow such
> > >mapping ?
> > 
> > What's the point to use vmalloc() in this case?
> 
> Look at various large hashes we have in the system, all using
> vmalloc() :
> 
> [    0.006856] Dentry cache hash table entries: 16777216 (order: 15, 134217728 bytes)
> [    0.033130] Inode-cache hash table entries: 8388608 (order: 14, 67108864 bytes)
> [    1.197621] TCP established hash table entries: 524288 (order: 11, 8388608 bytes)

I see lower-order allocations in the upstream code. Is this some
distribution tweak?

> I would imagine a performance difference if we were using hugepages.

Okay, it's *probably* a valid point.

The hash tables are only allocated with vmalloc() on NUMA systems, if
hashdist=1 (the default on NUMA).  This is done to distribute the memory
between nodes. In the NUMA_NO_NODE case, vmalloc() will allocate all the
memory with 0-order page allocations: no physically contiguous memory for
hugepage mappings.
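
For reference, the allocation loop in __vmalloc_area_node() looks roughly
like this (paraphrased from mm/vmalloc.c, not the verbatim code):

/* Paraphrased from mm/vmalloc.c:__vmalloc_area_node(): every page of
 * the mapping is a separate 0-order allocation, so the backing memory
 * is never physically contiguous and cannot be mapped with hugepages. */
for (i = 0; i < area->nr_pages; i++) {
	struct page *page;

	if (node == NUMA_NO_NODE)
		page = alloc_page(gfp_mask);
	else
		page = alloc_pages_node(node, gfp_mask, 0);
	if (!page)
		goto fail;
	area->pages[i] = page;
}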

I guess we could teach vmalloc() to interleave between nodes in PMD_SIZE
chunks rather than PAGE_SIZE when the caller asks for a big allocation.
Although, I'm not sure it would fit all vmalloc() users.
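
Something along these lines (a hypothetical, untested sketch;
vmalloc_populate_pmd_chunks() doesn't exist, it's only to illustrate
the idea):

/* Hypothetical sketch, not existing kernel code: back the area with
 * PMD_SIZE physically contiguous chunks, round-robin over the online
 * nodes; the caller would fall back to per-page allocation on failure. */
static int vmalloc_populate_pmd_chunks(struct vm_struct *area, gfp_t gfp)
{
	unsigned int nr_chunks = area->size >> PMD_SHIFT;
	int node = first_online_node;
	unsigned int i;

	for (i = 0; i < nr_chunks; i++) {
		struct page *page;

		page = alloc_pages_node(node, gfp | __GFP_COMP,
					PMD_SHIFT - PAGE_SHIFT);
		if (!page)
			return -ENOMEM;
		area->pages[i] = page;

		node = next_online_node(node);
		if (node == MAX_NUMNODES)
			node = first_online_node;
	}
	return 0;
}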

We would also need to allocate a PMD_SIZE-aligned virtual address range
to be able to map the allocated memory with PMDs.
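
I.e. reserve the range with PMD_SIZE alignment and then install huge pmds
directly, something like this (hypothetical again; map_chunk_pmd() is
made up, and populating the intermediate page tables plus error handling
are omitted):

/* Hypothetical: map one physically contiguous PMD_SIZE chunk with a
 * single huge pmd in the kernel page tables. Assumes addr is
 * PMD_SIZE-aligned and the pud/pmd tables are already populated. */
static void map_chunk_pmd(unsigned long addr, struct page *page)
{
	pgd_t *pgd = pgd_offset_k(addr);
	pud_t *pud = pud_offset(pgd, addr);
	pmd_t *pmd = pmd_offset(pud, addr);

	set_pmd(pmd, pmd_mkhuge(pfn_pmd(page_to_pfn(page),
					PAGE_KERNEL)));
}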

It's a *potentially* interesting research project. Any volunteers?

-- 
 Kirill A. Shutemov