Message-ID: <20200116162630.6r3xc55kdyyq5tvz@steredhat>
Date: Thu, 16 Jan 2020 17:26:30 +0100
From: Stefano Garzarella <sgarzare@...hat.com>
To: Jens Axboe <axboe@...nel.dk>
Cc: Alexander Viro <viro@...iv.linux.org.uk>, io-uring@...r.kernel.org,
linux-kernel@...r.kernel.org, linux-fsdevel@...r.kernel.org
Subject: Re: [PATCH] io_uring: wakeup threads waiting for EPOLLOUT events
On Thu, Jan 16, 2020 at 09:00:24AM -0700, Jens Axboe wrote:
> On 1/16/20 8:55 AM, Stefano Garzarella wrote:
> > On Thu, Jan 16, 2020 at 08:29:07AM -0700, Jens Axboe wrote:
> >> On 1/16/20 6:49 AM, Stefano Garzarella wrote:
> >>> io_uring_poll() sets the EPOLLOUT flag if there is space in the
> >>> SQ ring, so we should wake up threads waiting for EPOLLOUT
> >>> events when we expose the new SQ head to userspace.
> >>>
> >>> Signed-off-by: Stefano Garzarella <sgarzare@...hat.com>
> >>> ---
> >>>
> >>> Do you think it is better to change the names of 'cq_wait' and 'cq_fasync'?
> >>
> >> I honestly think it'd be better to have separate waits for in/out poll;
> >> the below patch will introduce some unfortunate cacheline traffic
> >> between the submitter and completer side.
> >
> > Agreed, makes sense. I'll send a v2 with a new 'sq_wait'.
> >
> > About fasync, do you think POLL_OUT support could be useful?
> > In that case, it may not be simple to have two separate fasync_structs;
> > do you have any advice?
>
> The fasync should not matter; it's all in the check of whether the sq
> side has any sleepers. That is rarely going to be the case, so as long
> as we can keep the check cheap, I think we're fine.
Right.
>
> Since the use case is mostly single submitter, unless you're doing
> something funky or unusual, you're not going to be needing POLLOUT ever.
The case I had in mind was kernel-side polling enabled and a single
submitter that can use epoll() to wait for free slots in the SQ
ring. (I don't have a test, maybe I can write one...)
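
Roughly something like this (completely untested sketch, using liburing
with IORING_SETUP_SQPOLL; the helper names are just illustrative):

#include <unistd.h>
#include <sys/epoll.h>
#include <liburing.h>

/* One-time setup: watch the ring fd for EPOLLOUT (space in the SQ ring). */
static int sq_epoll_setup(struct io_uring *ring)
{
	struct epoll_event ev = { .events = EPOLLOUT };
	int epfd = epoll_create1(0);

	if (epfd < 0)
		return -1;
	if (epoll_ctl(epfd, EPOLL_CTL_ADD, ring->ring_fd, &ev) < 0) {
		close(epfd);
		return -1;
	}
	return epfd;
}

/* Grab an SQE, sleeping on the ring fd while the SQ ring is full. */
static struct io_uring_sqe *get_sqe_blocking(struct io_uring *ring, int epfd)
{
	struct io_uring_sqe *sqe;
	struct epoll_event ev;

	while (!(sqe = io_uring_get_sqe(ring))) {
		/* without the wakeup this patch adds, we could sleep here forever */
		if (epoll_wait(epfd, &ev, 1, -1) < 0)
			return NULL;
	}
	return sqe;
}

With SQPOLL there is no other syscall in the submission fast path, so
blocking on the ring fd looks like the natural way to throttle the
submitter when the SQ ring fills up.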
> Hence I don't want to add any cost for it; I'd even advocate just doing
> waitqueue_active(), perhaps, if we can safely pull it off.
I'll try!
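
Maybe something along these lines for the v2 (untested sketch; 'sq_wait'
is the new dedicated waitqueue mentioned above, it doesn't exist yet):

/* in io_commit_sqring(), right after exposing the new SQ head */
smp_store_release(&ctx->rings->sq.head, ctx->cached_sq_head);

/*
 * Cheap check first: in the common single-submitter case nobody is
 * sleeping, so we only pay for the waitqueue_active() read. If that
 * turns out to be racy, we may need wq_has_sleeper() (barrier + check)
 * instead.
 */
if (waitqueue_active(&ctx->sq_wait))
	wake_up(&ctx->sq_wait);

io_uring_poll() would then also poll_wait() on 'sq_wait' instead of only
'cq_wait', so that EPOLLOUT waiters sleep on the SQ-side queue.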
Thanks,
Stefano