Message-ID: <CAB=BE-QaNBn1cVK6c7LM2cLpH_Ck_9SYw-YDYEnNrtwfoyu81Q@mail.gmail.com>
Date: Wed, 14 Jun 2023 11:49:53 -0700
From: Sandeep Dhavale <dhavale@...gle.com>
To: Tejun Heo <tj@...nel.org>
Cc: jiangshanlai@...il.com, torvalds@...ux-foundation.org,
peterz@...radead.org, linux-kernel@...r.kernel.org,
kernel-team@...a.com, joshdon@...gle.com, brho@...gle.com,
briannorris@...omium.org, nhuck@...gle.com, agk@...hat.com,
snitzer@...nel.org, void@...ifault.com, kernel-team@...roid.com,
Swapnil Sapkal <swapnil.sapkal@....com>, kprateek.nayak@....com
Subject: Re: [PATCH 14/24] workqueue: Generalize unbound CPU pods
Hi Tejun,
Thank you for your patches! I tested affinity-scopes-v2 with app launch
benchmarks. The numbers below are total scheduling latency for erofs kworkers;
the last column is with per-CPU highpri kthreads, i.e.:
CONFIG_EROFS_FS_PCPU_KTHREAD=y
CONFIG_EROFS_FS_PCPU_KTHREAD_HIPRI=y
Scheduling latency here is the time between when the task became eligible to run
and when it actually started running. The test does 50 cold app launches for each
configuration and aggregates the numbers.
| Metric       | Upstream | Cache nostrict | CPU nostrict | PCPU hpri |
|--------------+----------+----------------+--------------+-----------|
| Average (us) |    12286 |           7440 |         4435 |      2717 |
| Median (us)  |    12528 |           3901 |         3258 |      2476 |
| Minimum (us) |      287 |            555 |          638 |       357 |
| Maximum (us) |    35600 |          35911 |        13364 |      6874 |
| Stdev (us)   |     7918 |           7503 |         3323 |      1918 |
|--------------+----------+----------------+--------------+-----------|
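For reference, here is a minimal userspace sketch of the aggregation step, i.e.
how per-launch latency samples could be reduced to the statistics above. This is
not the actual test harness; the samples[] values are placeholders, not measured
data.

/* Reduce per-launch scheduling latencies (us) to avg/median/min/max/stdev. */
#include <math.h>
#include <stdio.h>
#include <stdlib.h>

static int cmp_u64(const void *a, const void *b)
{
        unsigned long long x = *(const unsigned long long *)a;
        unsigned long long y = *(const unsigned long long *)b;

        return (x > y) - (x < y);
}

int main(void)
{
        /* placeholder samples, one per cold app launch */
        unsigned long long samples[] = { 1100, 2300, 3200, 4800, 9100 };
        size_t n = sizeof(samples) / sizeof(samples[0]);
        unsigned long long median;
        double sum = 0.0, var = 0.0, mean;
        size_t i;

        qsort(samples, n, sizeof(samples[0]), cmp_u64);

        for (i = 0; i < n; i++)
                sum += samples[i];
        mean = sum / n;

        for (i = 0; i < n; i++)
                var += (samples[i] - mean) * (samples[i] - mean);

        /* integer median is fine for a sketch */
        median = n % 2 ? samples[n / 2]
                       : (samples[n / 2 - 1] + samples[n / 2]) / 2;

        printf("avg %.0f median %llu min %llu max %llu stdev %.0f (us)\n",
               mean, median, samples[0], samples[n - 1], sqrt(var / (n - 1)));
        return 0;
}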
We see here that with affinity-scopes-v2 (which defaults to cache nostrict),
there is a good improvement compared to the current codebase. The "CPU nostrict"
affinity scope for the erofs workqueue gives even better numbers for my test
launches, and it is logically the closest to the per-CPU highpri kthread
approach. The per-CPU highpri kthreads have the lowest latency and variation,
probably because they run at higher priority, as those threads are set to
sched_set_fifo_low().
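For anyone not familiar with that config, below is a rough sketch of what a
per-CPU highpri kthread setup looks like. This is not the erofs code itself;
the names pcpu_worker and pcpu_threadfn are made up for illustration, and CPU
hotplug handling plus error unwinding are omitted.

#include <linux/cpumask.h>
#include <linux/err.h>
#include <linux/init.h>
#include <linux/kthread.h>
#include <linux/percpu.h>
#include <linux/sched.h>

static DEFINE_PER_CPU(struct task_struct *, pcpu_worker);

static int pcpu_threadfn(void *data)
{
        while (1) {
                set_current_state(TASK_INTERRUPTIBLE);
                if (kthread_should_stop()) {
                        __set_current_state(TASK_RUNNING);
                        break;
                }
                schedule();             /* sleep until new work is queued */
                /* ... drain this CPU's pending work here ... */
        }
        return 0;
}

static int __init pcpu_workers_init(void)
{
        unsigned int cpu;

        for_each_online_cpu(cpu) {
                struct task_struct *tsk;

                /* one kthread created on and bound to each CPU */
                tsk = kthread_create_on_cpu(pcpu_threadfn, NULL, cpu,
                                            "pcpu_worker/%u");
                if (IS_ERR(tsk))
                        return PTR_ERR(tsk);

                /* lowest SCHED_FIFO priority, as mentioned above */
                sched_set_fifo_low(tsk);
                per_cpu(pcpu_worker, cpu) = tsk;
                wake_up_process(tsk);
        }
        return 0;
}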
At a high level, the app launch numbers themselves improved with your series, as
the entire workqueue subsystem improved across the board.
Thanks,
Sandeep.
On Thu, Jun 8, 2023 at 8:43 PM 'K Prateek Nayak' via kernel-team
<kernel-team@...roid.com> wrote:
>
> Hello Tejun,
>
> On 6/9/2023 4:20 AM, Tejun Heo wrote:
> > Hello,
> >
> > On Thu, Jun 08, 2023 at 08:31:34AM +0530, K Prateek Nayak wrote:
> >> [..snip..]
> >> o I consistently see a WARN_ON_ONCE() in kick_pool() being hit when I
> >> run "sudo ./stress-ng --iomix 96 --timeout 1m". I've seen few
> >> different stack traces so far. Including all below just in case:
> > ...
> >> This is the same WARN_ON_ONCE() you had added in the HEAD commit:
> >>
> >> $ scripts/faddr2line vmlinux kick_pool+0xdb
> >> kick_pool+0xdb/0xe0:
> >> kick_pool at kernel/workqueue.c:1130 (discriminator 1)
> >>
> >> $ sed -n 1130,1132p kernel/workqueue.c
> >> if (!WARN_ON_ONCE(wake_cpu >= nr_cpu_ids))
> >> p->wake_cpu = wake_cpu;
> >> get_work_pwq(work)->stats[PWQ_STAT_REPATRIATED]++;
> >>
> >> Let me know if you need any more data from my test setup.
> >> P.S. The kernel is still up and running (~30min) despite hitting this
> >> WARN_ON_ONCE() in my case :)
> >
> > Okay, that was me being stupid and not initializing the new fields for
> > per-cpu workqueues. Can you please test the following branch? It should have
> > both bugs fixed properly.
> >
> > git://git.kernel.org/pub/scm/linux/kernel/git/tj/wq.git affinity-scopes-v2
>
> I've not run into any panics or warnings with this one. Kernel has been
> stable for ~30min while running stress-ng iomix. We'll resume the testing
> with v2 :)
>
> >
> > If that doesn't crash, I'd love to hear how it affects the perf regressions
> > reported over the past few months.
> > Thanks.
> >
>
> --
> Thanks and Regards,
> Prateek
>