Message-ID: <ed46ef09-7766-eb80-a4ad-4c72d8dba188@oracle.com>
Date:   Thu, 7 Nov 2019 13:49:55 -0800
From:   Mike Kravetz <mike.kravetz@...cle.com>
To:     Matthew Wilcox <willy@...radead.org>,
        Waiman Long <longman@...hat.com>
Cc:     Andrew Morton <akpm@...ux-foundation.org>,
        linux-kernel@...r.kernel.org, linux-mm@...ck.org,
        Davidlohr Bueso <dave@...olabs.net>,
        Peter Zijlstra <peterz@...radead.org>,
        Ingo Molnar <mingo@...hat.com>,
        Will Deacon <will.deacon@....com>
Subject: Re: [PATCH] hugetlbfs: Take read_lock on i_mmap for PMD sharing

On 11/7/19 11:54 AM, Matthew Wilcox wrote:
> Are there other current users of the write lock that could use a read lock?
> At first blush, it would seem that unmap_ref_private() also only needs
> a read lock on the i_mmap tree.  I don't think hugetlb_change_protection()
> needs the write lock either.  Nor retract_page_tables().

I believe the semaphore still needs to be held in write mode while
calling huge_pmd_unshare (as is done at the call sites above).  Why?
huge_pmd_unshare contains this check for sharing:

	if (page_count(virt_to_page(ptep)) == 1)
		return 0;	/* implies no sharing */

Note that huge_pmd_share now increments the page count with the semaphore
held only in read mode.  That is safe: the count is atomic, so concurrent
increments need no further synchronization.  However, nobody else may be
changing the count while huge_pmd_unshare performs that check, and that is
why the semaphore must be taken in write mode there.
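
To make that concrete, here is a minimal userspace sketch of the race
being avoided.  This is not the hugetlbfs code: a pthreads rwlock stands
in for i_mmap_rwsem, an atomic counter for the PMD page count, and the
function names are invented for illustration.

	/* Minimal userspace sketch of the locking rationale above;
	 * not the hugetlbfs code.  A pthreads rwlock stands in for
	 * i_mmap_rwsem and an atomic counter for the PMD page count;
	 * the function names are made up for illustration. */
	#include <pthread.h>
	#include <stdatomic.h>
	#include <stdio.h>

	static pthread_rwlock_t i_mmap_rwsem = PTHREAD_RWLOCK_INITIALIZER;
	static atomic_int pmd_page_count = 1;	/* plays the role of
						 * page_count(virt_to_page(ptep)) */

	/* huge_pmd_share-like path: a read lock suffices because the
	 * increment itself is atomic. */
	static void share_pmd(void)
	{
		pthread_rwlock_rdlock(&i_mmap_rwsem);
		atomic_fetch_add(&pmd_page_count, 1);	/* safe in parallel */
		pthread_rwlock_unlock(&i_mmap_rwsem);
	}

	/* huge_pmd_unshare-like path: the count == 1 test is only
	 * meaningful if no sharer can bump the count between the test
	 * and acting on its result, so take the lock in write mode. */
	static int unshare_pmd(void)
	{
		int ret;

		pthread_rwlock_wrlock(&i_mmap_rwsem);
		if (atomic_load(&pmd_page_count) == 1) {
			ret = 0;	/* implies no sharing */
		} else {
			atomic_fetch_sub(&pmd_page_count, 1);
			ret = 1;	/* dropped one sharer's reference */
		}
		pthread_rwlock_unlock(&i_mmap_rwsem);
		return ret;
	}

	int main(void)
	{
		share_pmd();				/* count: 1 -> 2 */
		printf("unshare: %d\n", unshare_pmd());	/* 1, count back to 1 */
		printf("unshare: %d\n", unshare_pmd());	/* 0, no sharing left */
		return 0;
	}

If unshare_pmd() took the lock in read mode instead, a concurrent
share_pmd() could raise the count right after the == 1 test, invalidating
the conclusion that nothing shares the PMD.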
-- 
Mike Kravetz
