Date:   Wed, 4 Apr 2018 14:31:36 +0800
From:   Wang Long <wanglong19@...tuan.com>
To:     Greg Thelen <gthelen@...gle.com>, Michal Hocko <mhocko@...nel.org>
Cc:     Tejun Heo <tj@...nel.org>, Johannes Weiner <hannes@...xchg.org>,
        akpm@...l.org, LKML <linux-kernel@...r.kernel.org>,
        npiggin@...il.com
Subject: Re: [RFC] Is it correct to use spin_{lock|unlock}_irq
 in clear_page_dirty_for_io?



On 4/4/2018 7:12 AM, Greg Thelen wrote:
> On Tue, Apr 3, 2018 at 5:03 AM Michal Hocko <mhocko@...nel.org> wrote:
>
>> On Mon 02-04-18 19:50:50, Wang Long wrote:
>>> Hi,  Johannes Weiner and Tejun Heo
>>>
>>> I use linux-4.4.y to test the new cgroup controller io and the current
>>> stable kernel linux-4.4.y has the follow logic
>>>
>>>
>>> int clear_page_dirty_for_io(struct page *page)
>>> {
>>> ...
>>>         memcg = mem_cgroup_begin_page_stat(page);          ---------- (a)
>>>         wb = unlocked_inode_to_wb_begin(inode, &locked);   ---------- (b)
>>>         if (TestClearPageDirty(page)) {
>>>                 mem_cgroup_dec_page_stat(memcg, MEM_CGROUP_STAT_DIRTY);
>>>                 dec_zone_page_state(page, NR_FILE_DIRTY);
>>>                 dec_wb_stat(wb, WB_RECLAIMABLE);
>>>                 ret = 1;
>>>         }
>>>         unlocked_inode_to_wb_end(inode, locked);           ---------- (c)
>>>         mem_cgroup_end_page_stat(memcg);                   ---------- (d)
>>>         return ret;
>>> ...
>>> }
>>>
>>>
>>> When the memcg is being moved and the inode's I_WB_SWITCH flag is
>>> set, the locking sequence is the following:
>>>
>>>
>>> spin_lock_irqsave(&memcg->move_lock, flags); -------------(a)
>>>          spin_lock_irq(&inode->i_mapping->tree_lock); ------------(b)
>>>          spin_unlock_irq(&inode->i_mapping->tree_lock); -----------(c)
>>> spin_unlock_irqrestore(&memcg->move_lock, flags); -----------(d)
>>>
>>>
>>> After (c), local IRQs are enabled unconditionally, even though
>>> memcg->move_lock taken at (a) is still held. I think this is not
>>> correct.
>>>
>>> We hit a deadlock backtrace from exactly this window: after (c), the
>>> CPU took a softirq, and the softirq handler also called
>>> mem_cgroup_begin_page_stat() to take the same memcg->move_lock.
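>>>
>>> In miniature, the broken pattern and the danger window look like
>>> this (an annotated sketch only; the variable names are
>>> illustrative):
>>>
>>> unsigned long flags;
>>>
>>> spin_lock_irqsave(&memcg->move_lock, flags); /* IRQs off, state saved */
>>> spin_lock_irq(&mapping->tree_lock);          /* IRQs already off */
>>> spin_unlock_irq(&mapping->tree_lock);        /* re-enables IRQs unconditionally */
>>> /*
>>>  * Danger window: move_lock is still held, but a softirq can now
>>>  * preempt us on this CPU and spin on the same move_lock -> deadlock.
>>>  */
>>> spin_unlock_irqrestore(&memcg->move_lock, flags);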
>>>
>>> Since the required conditions are rarely all met, this scenario is
>>> difficult to reproduce, but it really exists.
>>>
>>> So how about changing (b) and (c) to
>>> spin_lock_irqsave()/spin_unlock_irqrestore()?
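>>>
>>> That is, the inner pair would preserve the IRQ state instead of
>>> assuming it (a sketch of the idea only; the second flags variable
>>> is for illustration):
>>>
>>> unsigned long flags, flags2;
>>>
>>> spin_lock_irqsave(&memcg->move_lock, flags);
>>>         spin_lock_irqsave(&inode->i_mapping->tree_lock, flags2);
>>>         spin_unlock_irqrestore(&inode->i_mapping->tree_lock, flags2);
>>> spin_unlock_irqrestore(&memcg->move_lock, flags);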
>> Yes, it seems we really need this even for the current tree. Please note
>> that, at least, clear_page_dirty_for_io() doesn't lock the memcg anymore.
>> __cancel_dirty_page() still uses lock_page_memcg() (the former
>> mem_cgroup_begin_page_stat()), though.
>> --
>> Michal Hocko
>> SUSE Labs
> I agree the issue looks real in 4.4 stable and upstream.  It seems like
> unlocked_inode_to_wb_begin/_end should use spin_lock_irqsave/restore.
>
> I'm testing a little patch now.
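>
> For a rough idea of its shape (a sketch only, not the final patch):
> the begin/end helpers would carry the saved IRQ state through a small
> cookie instead of blindly re-enabling IRQs, e.g.:
>
> struct wb_lock_cookie {
> 	bool locked;
> 	unsigned long flags;
> };
>
> static inline struct bdi_writeback *
> unlocked_inode_to_wb_begin(struct inode *inode, struct wb_lock_cookie *cookie)
> {
> 	rcu_read_lock();
> 	/* If the wb association may be switching (I_WB_SWITCH), take
> 	 * tree_lock and save the current IRQ state rather than
> 	 * assuming IRQs were enabled. */
> 	cookie->locked = smp_load_acquire(&inode->i_state) & I_WB_SWITCH;
> 	if (unlikely(cookie->locked))
> 		spin_lock_irqsave(&inode->i_mapping->tree_lock, cookie->flags);
> 	return inode_to_wb(inode);
> }
>
> static inline void
> unlocked_inode_to_wb_end(struct inode *inode, struct wb_lock_cookie *cookie)
> {
> 	if (unlikely(cookie->locked))
> 		spin_unlock_irqrestore(&inode->i_mapping->tree_lock,
> 				       cookie->flags);
> 	rcu_read_unlock();
> }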
Thanks.

When fixing it upstream, please note that the longterm kernels 4.9 and
4.14 also need the fix.
