Message-ID: <bfe5a925-69ce-46af-a720-14e1d2fd30b5@linux.dev>
Date: Tue, 11 Nov 2025 00:39:29 +0800
From: Lance Yang <lance.yang@...ux.dev>
To: Harry Yoo <harry.yoo@...cle.com>
Cc: akpm@...ux-foundation.org,
 syzbot+3f5f9a0d292454409ca6@...kaller.appspotmail.com,
 syzbot+ci5a676d3d210999ee@...kaller.appspotmail.com, david@...hat.com,
 linux-kernel@...r.kernel.org, linux-mm@...ck.org, muchun.song@...ux.dev,
 osalvador@...e.de, syzkaller-bugs@...glegroups.com, syzbot@...ts.linux.dev,
 syzbot@...kaller.appspotmail.com
Subject: Re: [PATCH v2 1/1] mm/hugetlb: fix possible deadlocks in hugetlb VMA
 unmap paths



On 2025/11/10 20:17, Harry Yoo wrote:
> On Mon, Nov 10, 2025 at 07:15:53PM +0800, Lance Yang wrote:
>> From: Lance Yang <lance.yang@...ux.dev>
>>
>> The hugetlb VMA unmap path contains several potential deadlocks, as
>> reported by syzbot. These deadlocks occur in __hugetlb_zap_begin(),
>> move_hugetlb_page_tables(), and the retry path of
>> hugetlb_unmap_file_folio() (affecting remove_inode_hugepages() and
>> unmap_vmas()), where vma_lock is acquired before i_mmap_lock. This lock
>> ordering conflicts with other paths like hugetlb_fault(), which establish
>> the correct dependency as i_mmap_lock -> vma_lock.
>>
>> Possible unsafe locking scenario:
>>
>> CPU0                                 CPU1
>> ----                                 ----
>> lock(&vma_lock->rw_sema);
>>                                       lock(&i_mmap_lock);
>>                                       lock(&vma_lock->rw_sema);
>> lock(&i_mmap_lock);
>>
>> Resolve the circular dependencies reported by syzbot across multiple call
>> chains by reordering the locks in all conflicting paths to consistently
>> follow the established i_mmap_lock -> vma_lock order.
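
For illustration, the inversion above is a textbook ABBA pattern. A minimal
standalone userspace analogy (the lock names below are made up and only stand
in for vma_lock->rw_sema and i_mmap_rwsem; this is not kernel code) would be:

/* abba.c - build with: gcc -pthread abba.c -o abba
 * Running it will usually hang, which is exactly the point: each thread
 * ends up blocked on the lock the other one already holds. */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t vma_lock_analog    = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t i_mmap_lock_analog = PTHREAD_MUTEX_INITIALIZER;

static void *cpu0(void *arg)   /* vma_lock first, then i_mmap_lock */
{
	pthread_mutex_lock(&vma_lock_analog);
	usleep(100000);                    /* widen the race window */
	pthread_mutex_lock(&i_mmap_lock_analog);
	puts("cpu0 got both locks");
	pthread_mutex_unlock(&i_mmap_lock_analog);
	pthread_mutex_unlock(&vma_lock_analog);
	return NULL;
}

static void *cpu1(void *arg)   /* i_mmap_lock first, then vma_lock */
{
	pthread_mutex_lock(&i_mmap_lock_analog);
	usleep(100000);
	pthread_mutex_lock(&vma_lock_analog);
	puts("cpu1 got both locks");
	pthread_mutex_unlock(&vma_lock_analog);
	pthread_mutex_unlock(&i_mmap_lock_analog);
	return NULL;
}

int main(void)
{
	pthread_t t0, t1;

	pthread_create(&t0, NULL, cpu0, NULL);
	pthread_create(&t1, NULL, cpu1, NULL);
	pthread_join(t0, NULL);
	pthread_join(t1, NULL);
	return 0;
}

Lockdep flags the same pattern in the kernel before it actually hangs.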
> 
> But mm/rmap.c says:
>> * hugetlbfs PageHuge() take locks in this order:
>> *   hugetlb_fault_mutex (hugetlbfs specific page fault mutex)
>> *     vma_lock (hugetlb specific lock for pmd_sharing)
>> *       mapping->i_mmap_rwsem (also used for hugetlb pmd sharing)
>> *         folio_lock
>> */

Thanks! You are right, I was mistaken ...

> 
> I think the commit message should explain why the locking order described
> above is incorrect (or when it became incorrect) and fix the comment?

I think the locking order documented in mm/rmap.c (vma_lock -> i_mmap_lock)
is indeed the correct one to follow.

So this fix has it backwards. I'll rework it to address the actual violations
of that order instead.
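
To spell out the documented nesting, here is a rough sketch of a path that
follows the mm/rmap.c order. The surrounding function and its arguments are
made up purely for illustration; only the locking helpers are the existing
kernel ones, and this is not meant as a patch:

/* Sketch only: the documented hugetlb lock nesting, top to bottom. */
static void hugetlb_lock_order_sketch(struct vm_area_struct *vma,
				      struct address_space *mapping,
				      pgoff_t idx, struct folio *folio)
{
	u32 hash = hugetlb_fault_mutex_hash(mapping, idx);

	mutex_lock(&hugetlb_fault_mutex_table[hash]);	/* 1. fault mutex  */
	hugetlb_vma_lock_read(vma);			/* 2. vma_lock     */
	i_mmap_lock_read(mapping);			/* 3. i_mmap_rwsem */
	folio_lock(folio);				/* 4. folio lock   */

	/* ... */

	folio_unlock(folio);
	i_mmap_unlock_read(mapping);
	hugetlb_vma_unlock_read(vma);
	mutex_unlock(&hugetlb_fault_mutex_table[hash]);
}

A path that acquires i_mmap_rwsem before vma_lock inverts steps 2 and 3, and
those are the violations the rework should target.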

Thanks,
Lance
