Open Source and information security mailing list archives
Date:   Wed, 15 Dec 2021 04:19:03 +0000
From:   Matthew Wilcox <willy@...radead.org>
To:     syzbot <syzbot+c915885f05d8e432e7b4@...kaller.appspotmail.com>
Cc:     akpm@...ux-foundation.org, dhowells@...hat.com, hughd@...gle.com,
        linux-kernel@...r.kernel.org, linux-mm@...ck.org,
        syzkaller-bugs@...glegroups.com
Subject: Re: [syzbot] possible deadlock in split_huge_page_to_list

On Tue, Dec 14, 2021 at 05:03:26PM -0800, syzbot wrote:
> commit 3ebffc96befbaf9de9297b00d67091bb702fad8e
> Author: Matthew Wilcox (Oracle) <willy@...radead.org>
> Date:   Sun Jun 28 02:19:08 2020 +0000
> 
>     mm: Use multi-index entries in the page cache
> 
> bisection log:  https://syzkaller.appspot.com/x/bisect.txt?x=1276e4bab00000
> final oops:     https://syzkaller.appspot.com/x/report.txt?x=1176e4bab00000
> console output: https://syzkaller.appspot.com/x/log.txt?x=1676e4bab00000

Well, this is all entirely plausible:

+               xas_split_alloc(&xas, head, compound_order(head),
+                               mapping_gfp_mask(mapping) & GFP_RECLAIM_MASK);

It looks like I can fix this by moving the memory allocation before
the acquisition of the i_mmap_lock.  Any objections to this:

+++ b/mm/huge_memory.c
@@ -2653,6 +2653,13 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
                        goto out;
                }

+               xas_split_alloc(&xas, head, compound_order(head),
+                               mapping_gfp_mask(mapping) & GFP_RECLAIM_MASK);
+               if (xas_error(&xas)) {
+                       ret = xas_error(&xas);
+                       goto out;
+               }
+
                anon_vma = NULL;
                i_mmap_lock_read(mapping);

@@ -2679,15 +2686,6 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)

        unmap_page(head);

-       if (mapping) {
-               xas_split_alloc(&xas, head, compound_order(head),
-                               mapping_gfp_mask(mapping) & GFP_RECLAIM_MASK);
-               if (xas_error(&xas)) {
-                       ret = xas_error(&xas);
-                       goto out_unlock;
-               }
-       }
-
        /* block interrupt reentry in xa_lock and spinlock */
        local_irq_disable();
        if (mapping) {

(relative to the above patch)
