Message-ID: <54067486.2000105@huawei.com>
Date:	Wed, 3 Sep 2014 09:53:10 +0800
From:	Xue jiufei <xuejiufei@...wei.com>
To:	Dave Chinner <david@...morbit.com>
CC:	Andrew Morton <akpm@...ux-foundation.org>,
	<linux-kernel@...r.kernel.org>, <linux-mm@...ck.org>,
	<linux-fsdevel@...r.kernel.org>,
	"ocfs2-devel@....oracle.com" <ocfs2-devel@....oracle.com>,
	Junxiao Bi <junxiao.bi@...cle.com>
Subject: Re: [PATCH] fs/super.c: do not shrink fs slab during direct memory
 reclaim

Hi, Dave
On 2014/9/3 9:02, Dave Chinner wrote:
> On Tue, Sep 02, 2014 at 05:03:27PM +0800, Xue jiufei wrote:
>> Hi, Dave
>> On 2014/9/2 7:51, Dave Chinner wrote:
>>> On Fri, Aug 29, 2014 at 05:57:22PM +0800, Xue jiufei wrote:
>>>> The patch tries to solve a deadlock problem caused by a cluster
>>>> filesystem such as ocfs2. The problem can happen at least in the
>>>> following situation:
>>>> 1) On receiving a connect message from another node, the node
>>>> queues a work_struct, o2net_listen_work.
>>>> 2) o2net_wq processes this work and calls sock_alloc() to allocate
>>>> memory for a new socket.
>>>> 3) When available memory is not enough, the allocation enters
>>>> direct memory reclaim and triggers inode cleanup. The inode being
>>>> cleaned up happens to be an ocfs2 inode, so reclaim calls
>>>> evict()->ocfs2_evict_inode()->ocfs2_drop_lock()->dlmunlock()
>>>> ->o2net_send_message_vec() and waits for the unlock response from
>>>> the master.
>>>> 4) The TCP layer receives the response, calls o2net_data_ready()
>>>> and queues sc_rx_work, waiting for o2net_wq to process this work.
>>>> 5) o2net_wq is a single-threaded workqueue that processes work
>>>> items one by one. It is still running o2net_listen_work and cannot
>>>> handle sc_rx_work, so we deadlock.
>>>>
>>>> It is impossible to pass GFP_NOFS to the memory allocation in
>>>> sock_alloc(), so we set PF_FSTRANS to prevent the task from
>>>> re-entering the filesystem when available memory is not enough
>>>> (the approach is sketched below).
>>>>
>>>> Signed-off-by: joyce.xue <xuejiufei@...wei.com>
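To make that concrete, the mechanics are roughly the following sketch
(not the actual diff; a real patch would save and restore the previous
flag value rather than clearing it unconditionally):

	/* o2net side, around the socket allocation that may reclaim */
	current->flags |= PF_FSTRANS;
	ret = sock_create(PF_INET, SOCK_STREAM, IPPROTO_TCP, &sock);
	current->flags &= ~PF_FSTRANS;

	/* fs/super.c side, at the top of super_cache_scan(): skip
	 * shrinking the fs slab on behalf of such a task */
	if (current->flags & PF_FSTRANS)
		return SHRINK_STOP;
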
>>>
>>> For the second time: use memalloc_noio_save/memalloc_noio_restore.
>>> And please put a great big comment in the code explaining why you
>>> need to do this special thing with memory reclaim flags.
>>>
>>> Cheers,
>>>
>>> Dave.
>>>
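For reference, the pattern being suggested looks roughly like this (the
save/restore helpers are the real ones from include/linux/sched.h; the
wrapper function here is only illustrative):

	static int o2net_alloc_sock_sketch(struct socket **sock)
	{
		unsigned int noio_flag;
		int ret;

		/*
		 * Direct reclaim entered from this allocation must not
		 * call back into the filesystem: o2net_wq is the same
		 * single-threaded workqueue that completes the DLM
		 * unlock requests reclaim would wait on, so re-entering
		 * ocfs2 here deadlocks against ourselves.
		 */
		noio_flag = memalloc_noio_save();
		ret = sock_create(PF_INET, SOCK_STREAM, IPPROTO_TCP, sock);
		memalloc_noio_restore(noio_flag);

		return ret;
	}
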
>> Thanks for your reply. But I am afraid that memalloc_noio_save/
>> memalloc_noio_restore cannot solve my problem. When PF_MEMALLOC_NOIO
>> is set, __GFP_IO is cleared, which avoids doing IO in direct memory
>> reclaim.
> 
> Well, yes. It sets a process flag that is used to avoid re-entrancy
> issues in direct reclaim. Direct reclaim is more than just the
> superblock shrinker - there are lots of other shrinkers, page
> reclaim, etc and I bet there are other paths that can trigger the
> deadlock you are seeing. We need to protect against all those
> cases, not just the one shrinker you see a problem with. i.e. we
> need to clear __GFP_FS from *all* reclaim, not just the superblock
> shrinker.
> 
> Also, PF_FSTRANS is used internally by filesystems, not the
> generic code.  If we start spreading it through generic code like
> this, we start breaking filesystems that rely on it having a
> specific, filesystem internal meaning.  So it's a NACK on that basis
> as well.
> 
>> However, __GFP_FS is still set, so pruning of the dcache and icache
>> during memory allocation is not avoided, resulting in the deadlock I
>> described.
> 
> You have a deadlock in direct reclaim, and we already have a
> template for setting a process flag that is used to indirectly
> control direct reclaim behaviour. If the current process flag
> doesn't provide precisely the coverage you need, then use that
> implementation as the template to do exactly what is needed for your
> case.
> 
> Cheers,
> 
> Dave.
> 
Thanks very much for your advice. I will send another patch later.
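A rough sketch of that direction, modelled on the existing
PF_MEMALLOC_NOIO helpers in include/linux/sched.h (the PF_MEMALLOC_NOFS
name and the helpers below are only illustrative, nothing like them is
in the tree today):

	static inline unsigned int memalloc_nofs_save(void)
	{
		unsigned int flags = current->flags & PF_MEMALLOC_NOFS;

		current->flags |= PF_MEMALLOC_NOFS;
		return flags;
	}

	static inline void memalloc_nofs_restore(unsigned int flags)
	{
		current->flags = (current->flags & ~PF_MEMALLOC_NOFS) | flags;
	}

	/* applied wherever the gfp mask enters reclaim, the same way
	 * memalloc_noio_flags() is applied today */
	static inline gfp_t memalloc_nofs_flags(gfp_t flags)
	{
		if (unlikely(current->flags & PF_MEMALLOC_NOFS))
			flags &= ~__GFP_FS;
		return flags;
	}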

Thanks,
Xuejiufei

