Message-ID: <50EF8614.3050408@windriver.com>
Date: Fri, 11 Jan 2013 11:25:08 +0800
From: Fan Du <fan.du@...driver.com>
To: Andrew Morton <akpm@...ux-foundation.org>
CC: <matthew@....cx>, <linux-fsdevel@...r.kernel.org>, <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] fs: Disable preempt when acquire i_size_seqcount write lock

On 2013-01-11 06:38, Andrew Morton wrote:
> On Wed, 9 Jan 2013 11:34:19 +0800
> Fan Du <fan.du@...driver.com> wrote:
>
>> Two rt tasks are bound to one CPU core.
>>
>> The higher-priority rt task A preempts a lower-priority rt task B which
>> has already taken the write seq lock, and then task A tries to acquire
>> the read seq lock; it is doomed to lock up:
>>
>> rt task B with lower priority: calls i_size_write
>>   write_seqcount_begin(&inode->i_size_seqcount);
>>                            rt task A with higher priority: calls sync, preempting task B
>>                              i_size_read
>>                                read_seqcount_begin  <-- lockup here...
>>   inode->i_size = i_size;   (never reached while task A spins)
>>
>
> Ouch.
>
> And even if the preempting task is preemptible, it will spend an entire
> timeslice pointlessly spinning, which isn't very good.
>
>> So disabling preemption while holding the i_size_seqcount *write* lock
>> cures the problem.
>>
>> ...
>>
>> --- a/include/linux/fs.h
>> +++ b/include/linux/fs.h
>> @@ -758,9 +758,11 @@ static inline loff_t i_size_read(const struct inode *inode)
>>  static inline void i_size_write(struct inode *inode, loff_t i_size)
>>  {
>>  #if BITS_PER_LONG==32 && defined(CONFIG_SMP)
>> +	preempt_disable();
>>  	write_seqcount_begin(&inode->i_size_seqcount);
>>  	inode->i_size = i_size;
>>  	write_seqcount_end(&inode->i_size_seqcount);
>> +	preempt_enable();
>>  #elif BITS_PER_LONG==32 && defined(CONFIG_PREEMPT)
>>  	preempt_disable();
>>  	inode->i_size = i_size;
>
> afaict all write_seqcount_begin()/read_seqretry() sites are vulnerable
> to this problem.  Would it not be better to do the preempt_disable() in
> write_seqcount_begin()?
IMHO, write_seqcount_begin()/write_seqcount_end() are usually wrapped in a
mutex, which gives the higher-priority task a chance to sleep so the
lower-priority task can get the CPU back and unlock, avoiding the scenario
this patch describes.  But in the i_size_write() case, disabling preemption
is the best choice I can find, unless someone else has a better idea :)

> Possible problems:
>
> - mm/filemap_xip.c does disk I/O under write_seqcount_begin().
>
> - dev_change_name() does GFP_KERNEL allocations under write_seqcount_begin()
>
> - I didn't review u64_stats_update_begin() callers.
>
> But I think calling schedule() under preempt_disable() is OK anyway?

--
Drifting with the waves, I remember only today's laughter
--fan
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel"
in the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/