Message-Id: <20201031113500.188798721@linuxfoundation.org>
Date: Sat, 31 Oct 2020 12:35:46 +0100
From: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
To: linux-kernel@...r.kernel.org
Cc: Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
	stable@...r.kernel.org,
	syzbot+27c12725d8ff0bfe1a13@...kaller.appspotmail.com,
	"Matthew Wilcox (Oracle)" <willy@...radead.org>,
	Jens Axboe <axboe@...nel.dk>
Subject: [PATCH 5.8 14/70] io_uring: Fix use of XArray in __io_uring_files_cancel

From: "Matthew Wilcox (Oracle)" <willy@...radead.org>

commit ce765372bc443573d1d339a2bf4995de385dea3a upstream.

We have to drop the lock during each iteration, so there's no advantage
to using the advanced API.  Convert this to a standard xa_for_each()
loop.

Reported-by: syzbot+27c12725d8ff0bfe1a13@...kaller.appspotmail.com
Signed-off-by: Matthew Wilcox (Oracle) <willy@...radead.org>
Signed-off-by: Jens Axboe <axboe@...nel.dk>
Signed-off-by: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
---
 fs/io_uring.c |   19 +++++--------------
 1 file changed, 5 insertions(+), 14 deletions(-)

--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -8008,28 +8008,19 @@ static void io_uring_attempt_task_drop(s
 void __io_uring_files_cancel(struct files_struct *files)
 {
 	struct io_uring_task *tctx = current->io_uring;
-	XA_STATE(xas, &tctx->xa, 0);
+	struct file *file;
+	unsigned long index;
 
 	/* make sure overflow events are dropped */
 	tctx->in_idle = true;
 
-	do {
-		struct io_ring_ctx *ctx;
-		struct file *file;
-
-		xas_lock(&xas);
-		file = xas_next_entry(&xas, ULONG_MAX);
-		xas_unlock(&xas);
-
-		if (!file)
-			break;
-
-		ctx = file->private_data;
+	xa_for_each(&tctx->xa, index, file) {
+		struct io_ring_ctx *ctx = file->private_data;
 
 		io_uring_cancel_task_requests(ctx, files);
 		if (files)
 			io_uring_del_task_file(file);
-	} while (1);
+	}
 }
 
 static inline bool io_uring_task_idle(struct io_uring_task *tctx)
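
For reference, a minimal sketch (not part of the patch) contrasting the two
XArray iteration styles the commit message refers to: the advanced API with an
explicit xa_state, and the normal xa_for_each() loop. The function names and
process_entry() below are hypothetical placeholders, not io_uring code.

/*
 * Sketch only, not from the patch: the two XArray iteration styles the
 * commit message contrasts.  process_entry() and both function names
 * are hypothetical placeholders.
 */
#include <linux/xarray.h>

static void process_entry(void *entry)
{
	/* per-entry work done with no XArray lock held; may sleep */
}

/* Advanced API, mirroring the removed code: explicit xa_state, manual
 * locking, manual termination.  Worthwhile only when the lock can be
 * held across lookups; here it is dropped on every iteration anyway. */
static void iterate_advanced(struct xarray *xa)
{
	XA_STATE(xas, xa, 0);
	void *entry;

	do {
		xas_lock(&xas);
		entry = xas_next_entry(&xas, ULONG_MAX);
		xas_unlock(&xas);
		if (!entry)
			break;
		process_entry(entry);
	} while (1);
}

/* Normal API: xa_for_each() takes and releases the RCU read lock for
 * each lookup internally and runs the loop body with no lock held, so
 * the body may sleep.  Same effect, far less boilerplate. */
static void iterate_simple(struct xarray *xa)
{
	unsigned long index;
	void *entry;

	xa_for_each(xa, index, entry)
		process_entry(entry);
}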