Message-Id: <20211216092651.18878-1-yajun.deng@linux.dev>
Date: Thu, 16 Dec 2021 17:26:51 +0800
From: Yajun Deng <yajun.deng@...ux.dev>
To: song@...nel.org, pmenzel@...gen.mpg.de, williams@...hat.com,
masahiroy@...nel.org
Cc: linux-kernel@...r.kernel.org, linux-rt-users@...r.kernel.org,
linux-raid@...r.kernel.org, Yajun Deng <yajun.deng@...ux.dev>
Subject: [PATCH v2] lib/raid6: Reduce high latency by using migrate instead of preempt

We found abnormally high latency when executing "modprobe raid6_pq": the
latency is greater than 1.2s with CONFIG_PREEMPT_VOLUNTARY=y, greater than
67ms with CONFIG_PREEMPT=y, and greater than 16ms with CONFIG_PREEMPT_RT=y.
It is caused by ksoftirqd failing to be scheduled while preemption is
disabled around the RAID6 benchmark loops; keeping preemption off for that
long is unreasonable.

Reduce the latency by using migrate_disable()/migrate_enable() instead of
preempt_disable()/preempt_enable(). Disabling only migration keeps the
benchmark pinned to one CPU, so the measurement stays valid, while still
allowing other tasks, including ksoftirqd, to preempt it.
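
For reference, the timing loop in raid6_choose_gen() then follows this
pattern (a simplified sketch, not a verbatim copy of the function):

	/* Benchmark one gen_syndrome() implementation for a fixed number of
	 * jiffies.  Only migration is disabled: the task stays on one CPU so
	 * the jiffies-based measurement is stable, but ksoftirqd and other
	 * tasks can still preempt it.
	 */
	migrate_disable();
	j0 = jiffies;
	while ((j1 = jiffies) == j0)	/* wait for the next tick edge */
		cpu_relax();
	while (time_before(jiffies, j1 + (1 << RAID6_TIME_JIFFIES_LG2))) {
		(*algo)->gen_syndrome(disks, PAGE_SIZE, *dptrs);
		perf++;
	}
	migrate_enable();
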
How to reproduce:
- Install cyclictest
    sudo apt install rt-tests
- Run cyclictest example in one terminal
    sudo cyclictest -S -p 95 -d 0 -i 1000 -D 24h -m
- Modprobe raid6_pq in another terminal
    sudo modprobe raid6_pq

This patch is beneficial for CONFIG_PREEMPT=y and CONFIG_PREEMPT_RT=y, but
has no effect for CONFIG_PREEMPT_VOLUNTARY=y.

Fixes: fe5cbc6e06c7 ("md/raid6 algorithms: delta syndrome functions")
Fixes: cc4589ebfae6 ("Rename raid6 files now they're in a 'raid6' directory.")
Link: https://lore.kernel.org/linux-raid/b06c5e3ef3413f12a2c2b2a241005af9@linux.dev/T/#t
Signed-off-by: Yajun Deng <yajun.deng@...ux.dev>
---
lib/raid6/algos.c | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/lib/raid6/algos.c b/lib/raid6/algos.c
index 6d5e5000fdd7..21611d05c34c 100644
--- a/lib/raid6/algos.c
+++ b/lib/raid6/algos.c
@@ -162,7 +162,7 @@ static inline const struct raid6_calls *raid6_choose_gen(
perf = 0;
- preempt_disable();
+ migrate_disable();
j0 = jiffies;
while ((j1 = jiffies) == j0)
cpu_relax();
@@ -171,7 +171,7 @@ static inline const struct raid6_calls *raid6_choose_gen(
(*algo)->gen_syndrome(disks, PAGE_SIZE, *dptrs);
perf++;
}
- preempt_enable();
+ migrate_enable();
if (perf > bestgenperf) {
bestgenperf = perf;
@@ -186,7 +186,7 @@ static inline const struct raid6_calls *raid6_choose_gen(
perf = 0;
- preempt_disable();
+ migrate_disable();
j0 = jiffies;
while ((j1 = jiffies) == j0)
cpu_relax();
@@ -196,7 +196,7 @@ static inline const struct raid6_calls *raid6_choose_gen(
PAGE_SIZE, *dptrs);
perf++;
}
- preempt_enable();
+ migrate_enable();
if (best == *algo)
bestxorperf = perf;
--
2.32.0