Message-ID: <11ded843-ac08-2306-ad0f-586978d038b1@kernel.dk>
Date: Mon, 24 Jul 2023 09:47:21 -0600
From: Jens Axboe <axboe@...nel.dk>
To: Oleksandr Natalenko <oleksandr@...alenko.name>,
stable@...r.kernel.org, Genes Lists <lists@...ience.com>
Cc: Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
Pavel Begunkov <asml.silence@...il.com>,
io-uring@...r.kernel.org, linux-kernel@...r.kernel.org,
Andres Freund <andres@...razel.de>
Subject: Re: [PATCH 6.4 800/800] io_uring: Use io_schedule* in cqring wait
On 7/23/23 12:06 PM, Oleksandr Natalenko wrote:
> Hello.
>
> On Sunday, 23 July 2023 at 19:43:50 CEST, Genes Lists wrote:
>> On 7/23/23 11:31, Jens Axboe wrote:
>> ...
>>> Just read the first one, but this is very much expected. It's now just
>>> correctly reflecting that one thread is waiting on IO. IO wait being
>>> 100% doesn't mean that one core is running 100% of the time, it just
>>> means it's WAITING on IO 100% of the time.
>>>
>>
>> Seems reasonable, thank you.
>>
>> Question - do you expect the iowait to stay high for a freshly created
>> mariadb doing nothing (as far as I can tell anyway) until the process
>> exits? Or would you expect it to drop in this case before the process
>> exits?
>>
>> For example, I tried the following - is the output what you expect?
>>
>> Create a fresh mariadb with no databases - monitor the core showing the
>> iowaits with:
>>
>> mpstat -P ALL 2 100
>>
>> # rm -f /var/lib/mysql/*
>> # mariadb-install-db --user=mysql --basedir=/usr --datadir=/var/lib/mysql
>>
>> # systemctl start mariadb (iowaits -> 100%)
>>
>>
>> # iotop -bo |grep maria (shows no output, iowait stays 100%)
>>
>> (this persists until mariadb process exits)
>>
>>
>> # systemctl stop mariadb (iowait drops to 0%)
>
> This is a visible userspace behaviour change with no changes in the
> userspace itself, so we cannot just ignore it. If for some reason this
> is how it should be now, how do we explain it to MariaDB devs to get
> this fixed?
It's not a behavioural change, it's a reporting change. There's no
functionality changing here. That said, I do think we should narrow it a
bit so we're only marked as being in iowait if the waiting task has
pending IO. That should still satisfy the initial problem, and it won't
flag iowait in mariadb-like cases where a task is just perpetually
waiting on requests.
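To make the "reporting change" point concrete: the number mpstat shows
is just the iowait field of the cpu line in /proc/stat, sampled over an
interval. A minimal userspace sketch of that calculation (not part of
the patch; the file name iowait_pct.c is purely illustrative):

/* iowait_pct.c - sample /proc/stat twice and print the aggregate
 * iowait percentage over the interval, roughly what mpstat reports.
 * Build: cc -O2 -o iowait_pct iowait_pct.c
 */
#include <stdio.h>
#include <unistd.h>

/* Aggregate "cpu " line: user nice system idle iowait irq softirq steal */
static int read_cpu_times(unsigned long long *iowait, unsigned long long *total)
{
	unsigned long long v[8] = { 0 };
	FILE *f = fopen("/proc/stat", "r");
	int i;

	if (!f)
		return -1;
	if (fscanf(f, "cpu %llu %llu %llu %llu %llu %llu %llu %llu",
		   &v[0], &v[1], &v[2], &v[3], &v[4], &v[5], &v[6], &v[7]) != 8) {
		fclose(f);
		return -1;
	}
	fclose(f);

	*iowait = v[4];
	*total = 0;
	for (i = 0; i < 8; i++)
		*total += v[i];
	return 0;
}

int main(void)
{
	unsigned long long io1, tot1, io2, tot2;

	if (read_cpu_times(&io1, &tot1))
		return 1;
	sleep(2);
	if (read_cpu_times(&io2, &tot2))
		return 1;

	/* iowait is "CPU idle while a task was blocked on IO" time, so a
	 * high percentage here doesn't mean any CPU was actually busy.
	 */
	printf("iowait: %.1f%%\n",
	       tot2 > tot1 ? 100.0 * (io2 - io1) / (tot2 - tot1) : 0.0);
	return 0;
}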
As a side effect, the patch below also removes the block plug flush
done by io_schedule_prepare(), which isn't necessary for io_uring.
If folks are able to test the below, that would be appreciated.
diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index 89a611541bc4..f4591b912ea8 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -2493,11 +2493,20 @@ int io_run_task_work_sig(struct io_ring_ctx *ctx)
return 0;
}
+static bool current_pending_io(void)
+{
+ struct io_uring_task *tctx = current->io_uring;
+
+ if (!tctx)
+ return false;
+ return percpu_counter_read_positive(&tctx->inflight);
+}
+
/* when returns >0, the caller should retry */
static inline int io_cqring_wait_schedule(struct io_ring_ctx *ctx,
struct io_wait_queue *iowq)
{
- int token, ret;
+ int io_wait, ret;
if (unlikely(READ_ONCE(ctx->check_cq)))
return 1;
@@ -2511,17 +2520,19 @@ static inline int io_cqring_wait_schedule(struct io_ring_ctx *ctx,
return 0;
/*
- * Use io_schedule_prepare/finish, so cpufreq can take into account
- * that the task is waiting for IO - turns out to be important for low
- * QD IO.
+ * Mark us as being in io_wait if we have pending requests, so cpufreq
+ * can take into account that the task is waiting for IO - turns out
+ * to be important for low QD IO.
*/
- token = io_schedule_prepare();
+ io_wait = current->in_iowait;
+ if (current_pending_io())
+ current->in_iowait = 1;
ret = 0;
if (iowq->timeout == KTIME_MAX)
schedule();
else if (!schedule_hrtimeout(&iowq->timeout, HRTIMER_MODE_ABS))
ret = -ETIME;
- io_schedule_finish(token);
+ current->in_iowait = io_wait;
return ret;
}
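
For anyone wanting to reproduce the mariadb pattern without mariadb: a
task parked in cqring wait with nothing inflight is enough. A rough
sketch using liburing (assumes liburing is installed; idle_wait.c is
just an illustrative name, not part of the patch):

/* idle_wait.c - park a task in io_uring CQ wait with nothing inflight.
 * Watch the iowait column with: mpstat -P ALL 2
 * Stop with Ctrl-C. Build: cc -O2 -o idle_wait idle_wait.c -luring
 */
#include <stdio.h>
#include <liburing.h>

int main(void)
{
	struct io_uring ring;
	struct io_uring_cqe *cqe;
	int ret;

	ret = io_uring_queue_init(8, &ring, 0);
	if (ret < 0) {
		fprintf(stderr, "queue_init: %d\n", ret);
		return 1;
	}

	/* Nothing is ever submitted, so this blocks in cqring wait
	 * indefinitely. Before the change above that wait is accounted
	 * as iowait; with it, the task has no pending requests and the
	 * wait should show up as plain sleep.
	 */
	ret = io_uring_wait_cqe(&ring, &cqe);
	fprintf(stderr, "wait_cqe: %d\n", ret);

	io_uring_queue_exit(&ring);
	return 0;
}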
--
Jens Axboe