Date:	Sat, 19 Mar 2016 04:01:01 +0300
From:	"Kirill A. Shutemov" <kirill@...temov.name>
To:	"Aneesh Kumar K.V" <aneesh.kumar@...ux.vnet.ibm.com>
Cc:	"Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>,
	Hugh Dickins <hughd@...gle.com>,
	Andrea Arcangeli <aarcange@...hat.com>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Dave Hansen <dave.hansen@...el.com>,
	Vlastimil Babka <vbabka@...e.cz>,
	Christoph Lameter <cl@...two.org>,
	Naoya Horiguchi <n-horiguchi@...jp.nec.com>,
	Jerome Marchand <jmarchan@...hat.com>,
	Yang Shi <yang.shi@...aro.org>,
	Sasha Levin <sasha.levin@...cle.com>,
	linux-kernel@...r.kernel.org, linux-mm@...ck.org,
	linux-fsdevel@...r.kernel.org
Subject: Re: [PATCHv4 04/25] rmap: support file thp

On Fri, Mar 18, 2016 at 03:10:06PM +0530, Aneesh Kumar K.V wrote:
> "Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com> writes:
> 
> > Naive approach: on mapping/unmapping the page as compound we update
> > ->_mapcount on each 4k page. That's not efficient, but it's not obvious
> > how we can optimize this. We can look into optimization later.
> >
> > PG_double_map optimization doesn't work for file pages since the
> > lifecycle of file pages is different compared to anon pages: a file page
> > can be mapped again at any time.
> >
> 
> Can you explain this more? We added PG_double_map so that we can keep
> page_remove_rmap simpler. So if it isn't a compound page we still can do
> 
> 	if (!atomic_add_negative(-1, &page->_mapcount))
> 
> I am trying to understand why we can't use that with file pages?

First: for non-compound pages we still have the simple
atomic_inc_and_test() / atomic_add_negative(-1, ...); nothing has changed there.
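
To make this concrete, here is a user-space sketch (C11 atomics, not kernel
code; the struct layout and helper names are made up for illustration) of the
non-compound accounting, with _mapcount starting at -1 as in the kernel:

```c
#include <stdatomic.h>
#include <stdbool.h>

/* User-space model of non-compound rmap accounting.
 * As in the kernel, _mapcount starts at -1 (no mappings). */
struct page {
	atomic_int _mapcount;
};

/* Models atomic_inc_and_test(): returns true when the first
 * mapping is added (-1 -> 0). */
static bool page_map(struct page *page)
{
	return atomic_fetch_add(&page->_mapcount, 1) + 1 == 0;
}

/* Models atomic_add_negative(-1, ...): returns true when the last
 * mapping is removed (0 -> -1). */
static bool page_unmap(struct page *page)
{
	return atomic_fetch_add(&page->_mapcount, -1) - 1 < 0;
}
```

The boolean result is what lets the caller know when to update NR_FILE_MAPPED-style statistics: only the first map and the last unmap matter.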

About compound pages:

For anon-THP, PG_double_map allows us to avoid touching _mapcount in the
subpages until a PMD that maps the page is split. This significantly lowers
the refcounting overhead for as long as the page is mapped with a PMD only,
since we then only need to increment compound_mapcount().

The optimization is possible due to the relatively simple lifecycle of an
anonymous THP page:

  - anon-THPs are always mapped with a PMD first;

  - a new mapping of the THP can only be created via fork();

  - the page can only get mapped with PTEs via split_huge_pmd().
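
Under those constraints, the fast path can be modeled like this (a
user-space sketch with C11 atomics; the names and the shrunken HPAGE_NR are
illustrative, not the kernel implementation):

```c
#include <stdatomic.h>
#include <stdbool.h>

#define HPAGE_NR 8	/* subpages per huge page; 512 for 2M/4k, shrunk here */

/* User-space model of the anon-THP fast path: while the huge page is
 * mapped with a PMD only, a single compound_mapcount is touched and
 * the per-subpage _mapcounts stay untouched until the PMD is split.
 * All counters start at -1, as in the kernel. */
struct huge_page {
	atomic_int compound_mapcount;
	atomic_int sub_mapcount[HPAGE_NR];
	bool double_map;		/* models PG_double_map */
};

static void map_pmd(struct huge_page *hp)
{
	/* One atomic op covers the whole huge page. */
	atomic_fetch_add(&hp->compound_mapcount, 1);
}

static void split_pmd(struct huge_page *hp)
{
	int i;

	/* Only now do we pay for touching every subpage: the one PMD
	 * mapping becomes HPAGE_NR PTE mappings. */
	for (i = 0; i < HPAGE_NR; i++)
		atomic_fetch_add(&hp->sub_mapcount[i], 1);
	hp->double_map = true;
	atomic_fetch_add(&hp->compound_mapcount, -1);
}
```

The point of the model: map_pmd() is O(1), and the O(HPAGE_NR) work is deferred to split_pmd(), which for anon pages is the only way PTE mappings can appear.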

For file-THP the situation is different. Once we have allocated a huge page
and put it in the radix tree, the page can be mapped with PTEs or PMDs at
any time. That makes the same optimization inapplicable there.
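
The naive approach from the patch can be sketched the same way (again a
user-space model with invented names, not the actual kernel code): a compound
mapping pays the O(nr_subpages) cost up front, so later PTE mappings need no
PG_double_map-style bookkeeping:

```c
#include <stdatomic.h>

#define HPAGE_NR 8	/* subpages per huge page; 512 for 2M/4k, shrunk here */

/* User-space model of the naive file-THP accounting: mapping the page
 * as compound bumps compound_mapcount AND every subpage's _mapcount.
 * All counters start at -1, as in the kernel. */
struct huge_page {
	atomic_int compound_mapcount;
	atomic_int sub_mapcount[HPAGE_NR];
};

static void file_map_pmd(struct huge_page *hp)
{
	int i;

	atomic_fetch_add(&hp->compound_mapcount, 1);
	/* The O(HPAGE_NR) cost paid on every compound map/unmap. */
	for (i = 0; i < HPAGE_NR; i++)
		atomic_fetch_add(&hp->sub_mapcount[i], 1);
}

/* A later PTE mapping of one subpage needs no special casing: the
 * subpage's _mapcount is already consistent. */
static void file_map_pte(struct huge_page *hp, int i)
{
	atomic_fetch_add(&hp->sub_mapcount[i], 1);
}
```

Because a file page can gain PTE or PMD mappings in any order, keeping every subpage's _mapcount correct at all times is what makes the simple rmap code possible, at the cost of the per-subpage loop.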

I think there *can* be some room for optimization, but I don't want to
invest more time here until it is identified as a bottleneck: it would lead
to more complex code on the rmap side.

-- 
 Kirill A. Shutemov
