Message-ID: <CAPhsuW7GhYyfNOQg3VovU7cqC0nnRTbm1B7bFkWWa75k8YgHew@mail.gmail.com>
Date: Fri, 17 Dec 2021 09:25:25 -0800
From: Song Liu <song@...nel.org>
To: Sebastian Andrzej Siewior <bigeasy@...utronix.de>
Cc: Yajun Deng <yajun.deng@...ux.dev>, masahiroy@...nel.org,
williams@...hat.com, Paul Menzel <pmenzel@...gen.mpg.de>,
open list <linux-kernel@...r.kernel.org>,
linux-rt-users@...r.kernel.org,
linux-raid <linux-raid@...r.kernel.org>, stable@...r.kernel.org,
Thomas Gleixner <tglx@...utronix.de>
Subject: Re: [PATCH v3] lib/raid6: Reduce high latency by using migrate
instead of preempt
On Fri, Dec 17, 2021 at 5:42 AM Sebastian Andrzej Siewior
<bigeasy@...utronix.de> wrote:
>
> On 2021-12-17 10:16:10 [+0800], Yajun Deng wrote:
> > We found an abnormally high latency when executing modprobe raid6_pq, the
> > latency is greater than 1.2s when CONFIG_PREEMPT_VOLUNTARY=y, greater than
> > 67ms when CONFIG_PREEMPT=y, and greater than 16ms when CONFIG_PREEMPT_RT=y.
> >
> > How to reproduce:
> > - Install cyclictest
> > sudo apt install rt-tests
> > - Run cyclictest example in one terminal
> > sudo cyclictest -S -p 95 -d 0 -i 1000 -D 24h -m
> > - Modprobe raid6_pq in another terminal
> > sudo modprobe raid6_pq
> >
> > This is caused by ksoftirqd failing to be scheduled due to disabled
> > preemption; the resulting delay is too long and unreasonable.
> >
> > Reduce the high latency by using migrate_disable()/migrate_enable() instead
> > of preempt_disable()/preempt_enable(); the latency then won't be greater
> > than 100us.
> >
> > This patch is beneficial for CONFIG_PREEMPT=y or CONFIG_PREEMPT_RT=y, but
> > has no effect for CONFIG_PREEMPT_VOLUNTARY=y.
>
> Why does it matter? This is only during boot-up/ module loading or do I
> miss something?
Yes, this only happens on boot-up and module loading. I don't know RT well
enough to tell whether latency during module loading is an issue.
> The delay is a jiffy so it depends on CONFIG_HZ. You do benchmark for
> the best algorithm and if you get preempted during that period then your
> results may be wrong and you make a bad selection.
With the current code, the delay _should be_ 16 jiffies. However, the experiment
hits much longer latencies. I agree this may cause inaccurate benchmark results
and thus a suboptimal RAID algorithm selection.
I guess the key question is whether long latency at module loading time matters.
If that doesn't matter, we should just drop this.
Thanks,
Song