Message-ID: <20140620174533.GA9635@node.dhcp.inet.fi>
Date: Fri, 20 Jun 2014 20:45:33 +0300
From: "Kirill A. Shutemov" <kirill@...temov.name>
To: Vlastimil Babka <vbabka@...e.cz>
Cc: linux-mm@...ck.org, Andrew Morton <akpm@...ux-foundation.org>,
David Rientjes <rientjes@...gle.com>,
Minchan Kim <minchan@...nel.org>, Mel Gorman <mgorman@...e.de>,
Joonsoo Kim <iamjoonsoo.kim@....com>,
Michal Nazarewicz <mina86@...a86.com>,
Naoya Horiguchi <n-horiguchi@...jp.nec.com>,
Christoph Lameter <cl@...ux.com>,
Rik van Riel <riel@...hat.com>,
Zhang Yanfei <zhangyanfei@...fujitsu.com>,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH v3 01/13] mm, THP: don't hold mmap_sem in khugepaged when
allocating THP
On Fri, Jun 20, 2014 at 05:49:31PM +0200, Vlastimil Babka wrote:
> When allocating a huge page for collapsing, khugepaged currently holds
> mmap_sem for reading on the mm where collapsing occurs. Afterwards the
> read lock is dropped before the write lock is taken on the same mmap_sem.
>
> Holding mmap_sem during the whole huge page allocation is therefore
> useless; the vma needs to be rechecked after taking the write lock
> anyway. Furthermore, huge page allocation might involve a rather long
> sync compaction, and thus block any mmap_sem writers and, for example,
> affect workloads that perform frequent m(un)map or mprotect operations.
>
> This patch simply releases the read lock before allocating a huge page.
> It also deletes an outdated comment that assumed the vma must be stable,
> as it was using alloc_hugepage_vma(). This is no longer true since
> commit 9f1b868a13 ("mm: thp: khugepaged: add policy for finding target
> node").
There is no point in touching ->mmap_sem in khugepaged_alloc_page() at
all. Please move up_read() outside khugepaged_alloc_page().
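
I.e. something along these lines (a rough sketch with simplified
signatures, not a tested patch): khugepaged_alloc_page() stops touching
mmap_sem entirely, and the caller drops the read lock itself before the
allocation.

	/* allocation only; no mmap_sem handling in here at all */
	static struct page *khugepaged_alloc_page(gfp_t gfp, int node)
	{
		return alloc_pages_exact_node(node, gfp, HPAGE_PMD_ORDER);
	}

	static void collapse_huge_page(struct mm_struct *mm,
				       gfp_t gfp, int node)
	{
		struct page *new_page;

		/* drop the read lock in the caller, before allocating */
		up_read(&mm->mmap_sem);

		new_page = khugepaged_alloc_page(gfp, node);
		if (!new_page)
			return;

		down_write(&mm->mmap_sem);
		/* look up and recheck the vma, then do the actual collapse */
		up_write(&mm->mmap_sem);
	}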
--
Kirill A. Shutemov