Message-ID: <CAHH2K0aMhquqkpbxEWR3CoeDyHyZHViYK3y629U+=Hguo_vgKQ@mail.gmail.com>
Date:   Wed, 11 Apr 2018 00:40:11 +0000
From:   Greg Thelen <gthelen@...gle.com>
To:     Wang Long <wanglong19@...tuan.com>
Cc:     Michal Hocko <mhocko@...nel.org>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Johannes Weiner <hannes@...xchg.org>,
        Tejun Heo <tj@...nel.org>, npiggin@...il.com,
        LKML <linux-kernel@...r.kernel.org>,
        Linux MM <linux-mm@...ck.org>
Subject: Re: [PATCH v3] writeback: safer lock nesting

On Tue, Apr 10, 2018 at 1:15 AM Wang Long <wanglong19@...tuan.com> wrote:

> > lock_page_memcg()/unlock_page_memcg() use spin_lock_irqsave/restore() if
> > the page's memcg is undergoing move accounting, which occurs when a
> > process leaves its memcg for a new one that has
> > memory.move_charge_at_immigrate set.
> >
> > unlocked_inode_to_wb_begin,end() use spin_lock_irq/spin_unlock_irq() if
> > the given inode is switching writeback domains.  Switches occur when
> > enough writes are issued from a new domain.
> >
> > This existing pattern is thus suspicious:
> >      lock_page_memcg(page);
> >      unlocked_inode_to_wb_begin(inode, &locked);
> >      ...
> >      unlocked_inode_to_wb_end(inode, locked);
> >      unlock_page_memcg(page);
> >
> > If an inode switch and a process' memcg migration are both in-flight, then
> > unlocked_inode_to_wb_end() will unconditionally enable interrupts while
> > still holding the lock_page_memcg() irq spinlock.  This suggests the
> > possibility of deadlock if an interrupt occurs before
> > unlock_page_memcg().
> >
> >      truncate
> >      __cancel_dirty_page
> >      lock_page_memcg
> >      unlocked_inode_to_wb_begin
> >      unlocked_inode_to_wb_end
> >      <interrupts mistakenly enabled>
> >                                      <interrupt>
> >                                      end_page_writeback
> >                                      test_clear_page_writeback
> >                                      lock_page_memcg
> >                                      <deadlock>
> >      unlock_page_memcg
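
To spell the nesting out concretely, here is a rough, kernel-flavored sketch
of the sequence above; the lock names are simplifications of what the helpers
actually take, not the literal mm/ or fs/ code:

        unsigned long flags;

        /* lock_page_memcg(): the memcg is moving, so take an irq-saving lock */
        spin_lock_irqsave(&memcg->move_lock, flags);

        /* unlocked_inode_to_wb_begin(): the inode is switching wb domains */
        spin_lock_irq(&inode->i_lock);
        ...
        /* unlocked_inode_to_wb_end(): unconditionally re-enables interrupts */
        spin_unlock_irq(&inode->i_lock);

        /*
         * Interrupts are back on even though move_lock is still held.  An
         * interrupt that ends writeback here re-enters lock_page_memcg(),
         * which spins on move_lock on the same CPU: deadlock.
         */
        spin_unlock_irqrestore(&memcg->move_lock, flags);
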
> >
> > Due to configuration limitations this deadlock is not currently possible
> > because we don't mix cgroup writeback (a cgroupv2 feature) and
> > memory.move_charge_at_immigrate (a cgroupv1 feature).
> >
> > If the kernel is hacked to always claim inode switching and memcg
> > moving_account, then this script triggers lockup in less than a minute:
> >    cd /mnt/cgroup/memory
> >    mkdir a b
> >    echo 1 > a/memory.move_charge_at_immigrate
> >    echo 1 > b/memory.move_charge_at_immigrate
> >    (
> >      echo $BASHPID > a/cgroup.procs
> >      while true; do
> >        dd if=/dev/zero of=/mnt/big bs=1M count=256
> >      done
> >    ) &
> >    while true; do
> >      sync
> >    done &
> >    sleep 1h &
> >    SLEEP=$!
> >    while true; do
> >      echo $SLEEP > a/cgroup.procs
> >      echo $SLEEP > b/cgroup.procs
> >    done
> >
> > Given the deadlock is not currently possible, it's debatable if there's
> > any reason to modify the kernel.  I suggest we do, to prevent future
> > surprises.
> This deadlock occurs three times in our environment.  It is better to cc
> stable kernel and backport it.

That's interesting.  Are you using cgroup v1 or v2?  Do you enable
memory.move_charge_at_immigrate?
I assume you've been using 4.4 stable.  I'll take a closer look at a 4.4
stable backport.
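
For reference while looking at a backport, a minimal sketch of the
safer-nesting idea: have the end() helper restore the irq state that begin()
saved, instead of unconditionally enabling interrupts.  The cookie struct,
its field names, and the inode_wb_is_switching() check below are placeholders
for illustration, not code quoted from the patch:

        /* Illustrative only; names are placeholders, not the patch's. */
        struct wb_lock_cookie {
                bool locked;
                unsigned long flags;
        };

        static void wb_lock_begin(struct inode *inode,
                                  struct wb_lock_cookie *cookie)
        {
                cookie->locked = false;
                if (unlikely(inode_wb_is_switching(inode))) {  /* hypothetical check */
                        spin_lock_irqsave(&inode->i_lock, cookie->flags);
                        cookie->locked = true;
                }
        }

        static void wb_lock_end(struct inode *inode,
                                struct wb_lock_cookie *cookie)
        {
                if (cookie->locked)
                        /* restore the caller's irq state rather than irq-enable */
                        spin_unlock_irqrestore(&inode->i_lock, cookie->flags);
        }

With that shape, a lock_page_memcg() section that wraps these calls keeps its
saved irq flags intact whether or not an inode switch was in flight.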
