Message-ID: <20220322002653.33865-3-kuniyu@amazon.co.jp>
Date: Tue, 22 Mar 2022 09:26:53 +0900
From: Kuniyuki Iwashima <kuniyu@...zon.co.jp>
To: Al Viro <viro@...iv.linux.org.uk>,
Andrew Morton <akpm@...ux-foundation.org>
CC: Kuniyuki Iwashima <kuniyu@...zon.co.jp>,
Kuniyuki Iwashima <kuni1840@...il.com>,
<linux-fsdevel@...r.kernel.org>, <linux-kernel@...r.kernel.org>,
<syzbot+bdd6e38a1ed5ee58d8bd@...kaller.appspotmail.com>,
"Soheil Hassas Yeganeh" <soheil@...gle.com>,
Davidlohr Bueso <dave@...olabs.net>,
"Sridhar Samudrala" <sridhar.samudrala@...el.com>,
Alexander Duyck <alexander.h.duyck@...el.com>
Subject: [PATCH 2/2] list: Fix a data-race around ep->rdllist.

ep_poll() first calls ep_events_available() with no lock held and
checks whether ep->rdllist is empty with list_empty_careful(), which
reads rdllist->prev. Thus, every access to rdllist->prev needs an
annotation to avoid store/load tearing.
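
For context, a minimal sketch of the two racing paths, paraphrased
from the call traces in the KCSAN report below (the eventpoll code is
abbreviated here, not quoted verbatim from fs/eventpoll.c):

	/* CPU 0: ep_send_events() -> ep_start_scan(), under ep->lock */
	list_splice_init(&ep->rdllist, txlist);	/* INIT_LIST_HEAD() plainly stores rdllist->prev */

	/* CPU 1: ep_poll() -> ep_events_available(), no lock held */
	if (!list_empty_careful(&ep->rdllist))	/* plainly loads rdllist->prev */
		...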

Note that INIT_LIST_HEAD_RCU() already has the annotations for both
prev and next.
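
For reference, INIT_LIST_HEAD_RCU() in include/linux/rculist.h is
(quoted from mainline as I read it; shown only for comparison, it is
not touched by this patch):

	static inline void INIT_LIST_HEAD_RCU(struct list_head *list)
	{
		WRITE_ONCE(list->next, list);
		WRITE_ONCE(list->prev, list);
	}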

Commit bf3b9f6372c4 ("epoll: Add busy poll support to epoll with
socket fds.") added the first lockless ep_events_available() call,
commit c5a282e9635e ("fs/epoll: reduce the scope of wq lock in
epoll_wait()") made some ep_events_available() calls lockless and
added a single call under a lock, and finally commit e59d3c64cba6
("epoll: eliminate unnecessary lock for zero timeout") made the last
ep_events_available() call lockless.

BUG: KCSAN: data-race in do_epoll_wait / do_epoll_wait

write to 0xffff88810480c7d8 of 8 bytes by task 1802 on cpu 0:
 INIT_LIST_HEAD include/linux/list.h:38 [inline]
 list_splice_init include/linux/list.h:492 [inline]
 ep_start_scan fs/eventpoll.c:622 [inline]
 ep_send_events fs/eventpoll.c:1656 [inline]
 ep_poll fs/eventpoll.c:1806 [inline]
 do_epoll_wait+0x4eb/0xf40 fs/eventpoll.c:2234
 do_epoll_pwait fs/eventpoll.c:2268 [inline]
 __do_sys_epoll_pwait fs/eventpoll.c:2281 [inline]
 __se_sys_epoll_pwait+0x12b/0x240 fs/eventpoll.c:2275
 __x64_sys_epoll_pwait+0x74/0x80 fs/eventpoll.c:2275
 do_syscall_x64 arch/x86/entry/common.c:50 [inline]
 do_syscall_64+0x44/0xd0 arch/x86/entry/common.c:80
 entry_SYSCALL_64_after_hwframe+0x44/0xae

read to 0xffff88810480c7d8 of 8 bytes by task 1799 on cpu 1:
 list_empty_careful include/linux/list.h:329 [inline]
 ep_events_available fs/eventpoll.c:381 [inline]
 ep_poll fs/eventpoll.c:1797 [inline]
 do_epoll_wait+0x279/0xf40 fs/eventpoll.c:2234
 do_epoll_pwait fs/eventpoll.c:2268 [inline]
 __do_sys_epoll_pwait fs/eventpoll.c:2281 [inline]
 __se_sys_epoll_pwait+0x12b/0x240 fs/eventpoll.c:2275
 __x64_sys_epoll_pwait+0x74/0x80 fs/eventpoll.c:2275
 do_syscall_x64 arch/x86/entry/common.c:50 [inline]
 do_syscall_64+0x44/0xd0 arch/x86/entry/common.c:80
 entry_SYSCALL_64_after_hwframe+0x44/0xae

value changed: 0xffff88810480c7d0 -> 0xffff888103c15098

Reported by Kernel Concurrency Sanitizer on:
CPU: 1 PID: 1799 Comm: syz-fuzzer Tainted: G W 5.17.0-rc7-syzkaller-dirty #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011

Fixes: e59d3c64cba6 ("epoll: eliminate unnecessary lock for zero timeout")
Fixes: c5a282e9635e ("fs/epoll: reduce the scope of wq lock in epoll_wait()")
Fixes: bf3b9f6372c4 ("epoll: Add busy poll support to epoll with socket fds.")
Reported-by: syzbot+bdd6e38a1ed5ee58d8bd@...kaller.appspotmail.com
Signed-off-by: Kuniyuki Iwashima <kuniyu@...zon.co.jp>
---
CC: Soheil Hassas Yeganeh <soheil@...gle.com>
CC: Davidlohr Bueso <dave@...olabs.net>
CC: Sridhar Samudrala <sridhar.samudrala@...el.com>
CC: Alexander Duyck <alexander.h.duyck@...el.com>
---
 include/linux/list.h | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/include/linux/list.h b/include/linux/list.h
index dd6c2041d..d7d2bfa1a 100644
--- a/include/linux/list.h
+++ b/include/linux/list.h
@@ -35,7 +35,7 @@
 static inline void INIT_LIST_HEAD(struct list_head *list)
 {
 	WRITE_ONCE(list->next, list);
-	list->prev = list;
+	WRITE_ONCE(list->prev, list);
 }
 
 #ifdef CONFIG_DEBUG_LIST
@@ -306,7 +306,7 @@ static inline int list_empty(const struct list_head *head)
 static inline void list_del_init_careful(struct list_head *entry)
 {
 	__list_del_entry(entry);
-	entry->prev = entry;
+	WRITE_ONCE(entry->prev, entry);
 	smp_store_release(&entry->next, entry);
 }
 
@@ -326,7 +326,7 @@ static inline void list_del_init_careful(struct list_head *entry)
 static inline int list_empty_careful(const struct list_head *head)
 {
 	struct list_head *next = smp_load_acquire(&head->next);
-	return list_is_head(next, head) && (next == head->prev);
+	return list_is_head(next, head) && (next == READ_ONCE(head->prev));
 }
 
 /**
-- 
2.30.2