Message-ID: <20130826112946.GD27005@ZenIV.linux.org.uk>
Date: Mon, 26 Aug 2013 12:29:47 +0100
From: Al Viro <viro@...IV.linux.org.uk>
To: Chuansheng Liu <chuansheng.liu@...el.com>
Cc: linux-fsdevel@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] Fix the race between the fget() and close()
On Tue, Aug 27, 2013 at 12:12:49AM +0800, Chuansheng Liu wrote:
>
> When one thread is calling sys_ioctl() and another thread is calling
> sys_close(), the current code protects most cases.
>
> But the following case causes a problem:
> T1                         T2                        T3
> sys_close(oldfile)         sys_open(newfile)         sys_ioctl(oldfile)
>  -> __close_fd()
>     lock file_lock
>     assign NULL file
>     put fd to be unused
>     unlock file_lock
>                            get new fd, same as old
>                            assign newfile to same fd
>                                                      fget_light()
>                                                      gets the newfile!!!
>                                                      decrease file->f_count
>                                                      file->f_count == 0
>                                                       --> try to release file
>
> The race is: while T1 is closing oldFD and T3 is calling ioctl() on oldFD,
> if T2 happens to be opening a new file at the same time, it may be handed a
> newFD with the same number as oldFD.
>
> Normally T3 should get a NULL file pointer, because the file was released by
> T1, but instead T3 gets the newfile pointer and continues the ioctl access.
>
> This can cause unexpected errors; we hit a system panic at do_vfs_ioctl().
>
> We can fix it by putting put_unused_fd() after filp_close(), which avoids
> this case.

NAK.  T3 getting the new file is valid (think of what happens if T1 returns
from close() before T2 enters open() and T3 hits ioctl() after both of those);
the userland code is, at the very least, racy, and no, moving put_unused_fd()
around is not going to solve any problems - it might shift the race window,
but that's it.
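
To make that concrete, here is a minimal userland sketch (hypothetical code,
with /dev/null as a stand-in for the real device, not the reporter's program)
of the pattern in question.  Whether thread C's ioctl() fails with EBADF, acts
on the old file, or acts on thread B's brand-new file is decided purely by
scheduling; no ordering change inside the kernel's close path can make it
well-defined:

	/*
	 * Hypothetical userland sketch of the race being discussed: A closes
	 * an fd that C is still using for ioctl(), while B opens a new file
	 * that may be assigned the same descriptor number.
	 */
	#include <fcntl.h>
	#include <pthread.h>
	#include <sys/ioctl.h>
	#include <unistd.h>

	static int shared_fd;			/* the "oldFD" shared by A and C */

	static void *thread_a(void *arg)	/* T1: close the old fd */
	{
		close(shared_fd);
		return NULL;
	}

	static void *thread_b(void *arg)	/* T2: open; may be handed the just-freed number */
	{
		open("/dev/null", O_RDONLY);
		return NULL;
	}

	static void *thread_c(void *arg)	/* T3: ioctl() on the possibly-stale fd */
	{
		int n;

		ioctl(shared_fd, FIONREAD, &n);	/* may end up acting on B's file */
		return NULL;
	}

	int main(void)
	{
		pthread_t a, b, c;

		shared_fd = open("/dev/null", O_RDONLY);
		pthread_create(&a, NULL, thread_a, NULL);
		pthread_create(&b, NULL, thread_b, NULL);
		pthread_create(&c, NULL, thread_c, NULL);
		pthread_join(a, NULL);
		pthread_join(b, NULL);
		pthread_join(c, NULL);
		return 0;
	}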
It certainly does not affect the possibility of the panics in do_vfs_ioctl()
you are seeing, and I would really like to see the details on those instead of
this kind of voodoo "fixes".
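
For reference, the ordering the patch wants to change lives in __close_fd();
a rough sketch of that logic (an approximation relying on the internal helpers
of fs/file.c, not the exact source) is below.  Moving the __put_unused_fd()
call after filp_close() would only delay the moment at which the descriptor
number can be handed out again; it would not stop an ioctl() on a stale
descriptor from finding whatever file ends up installed in that slot later:

	/*
	 * Rough sketch of the close path under discussion -- an approximation
	 * of the __close_fd() logic, assuming the usual fs/file.c includes and
	 * its internal helpers (__put_unused_fd, __clear_close_on_exec).
	 */
	int __close_fd(struct files_struct *files, unsigned fd)
	{
		struct file *file;
		struct fdtable *fdt;

		spin_lock(&files->file_lock);
		fdt = files_fdtable(files);
		if (fd >= fdt->max_fds)
			goto out_unlock;
		file = fdt->fd[fd];
		if (!file)
			goto out_unlock;
		rcu_assign_pointer(fdt->fd[fd], NULL);	/* "assign NULL file" in the table above */
		__clear_close_on_exec(fd, fdt);
		__put_unused_fd(files, fd);		/* "put fd to be unused" - the call the patch moves */
		spin_unlock(&files->file_lock);
		return filp_close(file, files);		/* drops the table's reference; may release the file */

	out_unlock:
		spin_unlock(&files->file_lock);
		return -EBADF;
	}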