Open Source and information security mailing list archives
Date: Tue, 26 Feb 2019 17:20:50 +0100
From: Oleg Nesterov <oleg@...hat.com>
To: Ravi Bangoria <ravi.bangoria@...ux.ibm.com>
Cc: Sherry Yang <sherryy@...roid.com>, Michal Hocko <mhocko@...e.com>,
	srikar@...ux.vnet.ibm.com, songliubraving@...com, peterz@...radead.org,
	mingo@...hat.com, acme@...nel.org, alexander.shishkin@...ux.intel.com,
	jolsa@...hat.com, namhyung@...nel.org, linux-kernel@...r.kernel.org,
	aneesh.kumar@...ux.ibm.com,
	syzbot+1068f09c44d151250c33@...kaller.appspotmail.com
Subject: Re: [PATCH] Uprobes: Fix deadlock between delayed_uprobe_lock and fs_reclaim

On 02/26, Ravi Bangoria wrote:
>
> On 2/8/19 2:03 PM, Ravi Bangoria wrote:
> >
> >
> > On 2/6/19 7:06 PM, Oleg Nesterov wrote:
> >> Ravi, I am on vacation till the end of this week, can't read your patch
> >> carefully.
> >>
> >> I am not sure I fully understand the problem, but shouldn't we change
> >> binder_alloc_free_page() to use mmput_async() ? Like it does if trylock
> >> fails.
> >
> > I don't understand binderfs code much so I'll let Sherry comment on this.
>
> Sherry, Can you please comment on this.

Yes, please. I don't even know what android/binder does, but I think we
need the trivial patch below. If nothing else, any reason why
err_down_write_mmap_sem_failed needs _async should equally apply to the
case when down_write_trylock() succeeds, no?

And. This lockdep report basically means that (without this change)
kmalloc is not safe under any lock which can be taken in __mmput() path,
I don't think we should blame or change uprobes.c.

Oleg.

--- x/drivers/android/binder_alloc.c
+++ x/drivers/android/binder_alloc.c
@@ -981,7 +981,7 @@ enum lru_status binder_alloc_free_page(s
 		trace_binder_unmap_user_end(alloc, index);

 		up_write(&mm->mmap_sem);
-		mmput(mm);
+		mmput_async(mm);
 	}

 	trace_binder_unmap_kernel_start(alloc, index);