Message-ID: <ZjoGJH1CEk+f+U7n@gmail.com>
Date: Tue, 7 May 2024 03:44:52 -0700
From: Breno Leitao <leitao@...ian.org>
To: Jens Axboe <axboe@...nel.dk>
Cc: Pavel Begunkov <asml.silence@...il.com>, leit@...a.com,
"open list:IO_URING" <io-uring@...r.kernel.org>,
open list <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] io_uring/io-wq: Use set_bit() and test_bit() at
worker->flags
On Fri, May 03, 2024 at 12:32:38PM -0600, Jens Axboe wrote:
> On 5/3/24 11:37 AM, Breno Leitao wrote:
> > Utilize set_bit() and test_bit() on worker->flags within io_uring/io-wq
> > to address potential data races.
> >
> > The structure io_worker->flags may be accessed through parallel data
> > paths, leading to concurrency issues. When KCSAN is enabled, it reveals
> > data races occurring in io_worker_handle_work and
> > io_wq_activate_free_worker functions.
> >
> > BUG: KCSAN: data-race in io_worker_handle_work / io_wq_activate_free_worker
> > write to 0xffff8885c4246404 of 4 bytes by task 49071 on cpu 28:
> > io_worker_handle_work (io_uring/io-wq.c:434 io_uring/io-wq.c:569)
> > io_wq_worker (io_uring/io-wq.c:?)
> > <snip>
> >
> > read to 0xffff8885c4246404 of 4 bytes by task 49024 on cpu 5:
> > io_wq_activate_free_worker (io_uring/io-wq.c:? io_uring/io-wq.c:285)
> > io_wq_enqueue (io_uring/io-wq.c:947)
> > io_queue_iowq (io_uring/io_uring.c:524)
> > io_req_task_submit (io_uring/io_uring.c:1511)
> > io_handle_tw_list (io_uring/io_uring.c:1198)
> >
> > Line numbers against commit 18daea77cca6 ("Merge tag 'for-linus' of
> > git://git.kernel.org/pub/scm/virt/kvm/kvm").
> >
> > These races involve writes and reads to the same memory location by
> > different tasks running on different CPUs. To mitigate this, refactor
> > the code to use atomic operations such as set_bit(), test_bit(), and
> > clear_bit() instead of basic "and" and "or" operations. This ensures
> > thread-safe manipulation of worker flags.
>
> Looks good, a few comments for v2:
>
> > diff --git a/io_uring/io-wq.c b/io_uring/io-wq.c
> > index 522196dfb0ff..6712d70d1f18 100644
> > --- a/io_uring/io-wq.c
> > +++ b/io_uring/io-wq.c
> > @@ -44,7 +44,7 @@ enum {
> > */
> > struct io_worker {
> > refcount_t ref;
> > - unsigned flags;
> > + unsigned long flags;
> > struct hlist_nulls_node nulls_node;
> > struct list_head all_list;
> > struct task_struct *task;
>
> This now creates a hole in the struct, maybe move 'lock' up after ref so
> that it gets filled and the current hole after 'lock' gets removed as
> well?
I am not sure I see it. In my tests, we end up with the same hole, and the
struct size is unchanged. This is what I got with that change:
struct io_worker {
refcount_t ref; /* 0 4 */
/* XXX 4 bytes hole, try to pack */
raw_spinlock_t lock; /* 8 64 */
/* --- cacheline 1 boundary (64 bytes) was 8 bytes ago --- */
<snip>
/* size: 336, cachelines: 6, members: 14 */
/* sum members: 328, holes: 2, sum holes: 8 */
/* forced alignments: 2, forced holes: 1, sum forced holes: 4 */
/* last cacheline: 16 bytes */
} __attribute__((__aligned__(8)));
And this is what the current patch produces:
struct io_worker {
refcount_t ref; /* 0 4 */
/* XXX 4 bytes hole, try to pack */
long unsigned int flags; /* 8 8 */
<snip>
/* size: 336, cachelines: 6, members: 14 */
/* sum members: 328, holes: 2, sum holes: 8 */
/* forced alignments: 2, forced holes: 1, sum forced holes: 4 */
/* last cacheline: 16 bytes */
} __attribute__((__aligned__(8)));
Since `lock` needs 8-byte alignment, the 4-byte hole after `ref` can only
be filled by a 4-byte member. A possible suggestion is therefore to move
`create_index` up after `ref`. Then we get a more tightly packed structure:
struct io_worker {
refcount_t ref; /* 0 4 */
int create_index; /* 4 4 */
long unsigned int flags; /* 8 8 */
struct hlist_nulls_node nulls_node; /* 16 16 */
struct list_head all_list; /* 32 16 */
struct task_struct * task; /* 48 8 */
struct io_wq * wq; /* 56 8 */
/* --- cacheline 1 boundary (64 bytes) --- */
struct io_wq_work * cur_work; /* 64 8 */
struct io_wq_work * next_work; /* 72 8 */
raw_spinlock_t lock; /* 80 64 */
/* --- cacheline 2 boundary (128 bytes) was 16 bytes ago --- */
struct completion ref_done; /* 144 88 */
/* --- cacheline 3 boundary (192 bytes) was 40 bytes ago --- */
long unsigned int create_state; /* 232 8 */
struct callback_head create_work __attribute__((__aligned__(8))); /* 240 16 */
/* --- cacheline 4 boundary (256 bytes) --- */
union {
struct callback_head rcu __attribute__((__aligned__(8))); /* 256 16 */
struct work_struct work; /* 256 72 */
} __attribute__((__aligned__(8))); /* 256 72 */
/* size: 328, cachelines: 6, members: 14 */
/* forced alignments: 2 */
/* last cacheline: 8 bytes */
} __attribute__((__aligned__(8)));
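In the declaration, that reordering would just be something like this (a
sketch; the members other than the moved one are unchanged):

	struct io_worker {
		refcount_t ref;
		int create_index;	/* moved up to fill the hole after ref */
		unsigned long flags;
		/* ... rest as before ... */
	};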
How does that sound?
> And then I'd renumber the flags, they take bit offsets, not
> masks/values. Otherwise it's a bit confusing for someone reading the
> code, using masks with test/set bit functions.
Good point. What about something like this?
enum {
IO_WORKER_F_UP = 0, /* up and active */
IO_WORKER_F_RUNNING = 1, /* account as running */
IO_WORKER_F_FREE = 2, /* worker on free list */
IO_WORKER_F_BOUND = 3, /* is doing bounded work */
};
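With the flags as bit numbers, the open-coded read-modify-write sequences
would then turn into something along these lines (a sketch of the
conversion, not the exact hunks from the patch):

	/* was: worker->flags |= IO_WORKER_F_FREE; */
	set_bit(IO_WORKER_F_FREE, &worker->flags);

	/* was: worker->flags &= ~IO_WORKER_F_FREE; */
	clear_bit(IO_WORKER_F_FREE, &worker->flags);

	/* was: if (worker->flags & IO_WORKER_F_UP) */
	if (test_bit(IO_WORKER_F_UP, &worker->flags))
		...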
Since we are now using WRITE_ONCE() in io_wq_worker(), I am wondering if
this is what we want to do there:

	WRITE_ONCE(worker->flags, BIT(IO_WORKER_F_UP) | BIT(IO_WORKER_F_RUNNING));
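(BIT(n) being the kernel's 1UL << n helper from <linux/bits.h>; with the
flags now holding bit numbers, OR-ing the values themselves together would
not build the intended mask.)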
Thanks