Message-ID: <5fa6744332000be5e914e32d205b634e22bc4f4f.camel@redhat.com>
Date: Wed, 28 May 2025 13:27:29 +0200
From: Gabriele Monaco <gmonaco@...hat.com>
To: Nam Cao <namcao@...utronix.de>
Cc: linux-kernel@...r.kernel.org, Steven Rostedt <rostedt@...dmis.org>,
linux-trace-kernel@...r.kernel.org, linux-doc@...r.kernel.org, Ingo Molnar
<mingo@...hat.com>, Peter Zijlstra <peterz@...radead.org>, Tomas Glozar
<tglozar@...hat.com>, Juri Lelli <jlelli@...hat.com>
Subject: Re: [RFC PATCH v2 12/12] rv: Add opid per-cpu monitor
On Tue, 2025-05-27 at 16:50 +0200, Nam Cao wrote:
> On Tue, May 27, 2025 at 04:35:04PM +0200, Gabriele Monaco wrote:
> > Thanks for trying it out, and good to know about this stressor.
> > Unfortunately it's a bit hard to understand from this stack trace,
> > but
> > that's very likely a problem in the model. I have a few ideas
> > where that
> > could be but I believe it's something visible only on a physical
> > machine
> > (haven't tested much on x86 bare metal, only VM).
> >
> > You're running on bare metal right?
>
> No, it's QEMU:
>
> qemu-system-x86_64 -enable-kvm -m 2048 -smp 4 \
> -nographic \
> -drive if=virtio,format=raw,file=bookworm.img \
> -kernel /srv/work/namcao/linux/arch/x86/boot/bzImage \
> -append "console=ttyS0 root=/dev/vda rw" \
>
> The kernel is just x86 defconfig + the monitors.
>
Apparently the error is visible only on non-PREEMPT_RT kernels: the models
are designed for PREEMPT_RT and I didn't really test them elsewhere.
I'm not sure it's worth tailoring them for non-RT kernels, but for now I
can just mark those monitors as RT-only via Kconfig.
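For concreteness, that could be a one-line Kconfig change along these lines
(the RV_MON_OPID symbol is assumed from this patch series, the rest is just
a sketch):

```kconfig
# Hypothetical sketch: gate the monitor on PREEMPT_RT, since the model
# only encodes the preemption rules that hold on RT kernels.
config RV_MON_OPID
	depends on RV
	depends on PREEMPT_RT
	select DA_MON_EVENTS_IMPLICIT
	bool "opid monitor"
	help
	  Per-cpu monitor checking operations with interrupts or
	  preemption disabled; valid on PREEMPT_RT kernels only.
```

Non-RT configurations would then simply not see the option, instead of
reporting spurious violations.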
This type of monitor in particular describes very precisely when certain
events can be preempted, so I wouldn't be too surprised if some rules
don't hold under every preemption configuration.
The idea is that, as long as the models hold true, some assumptions about
latency can be made; in the long run those assumptions likely differ
across preemption models.
That said, it might also be a stupid mistake on my part, so I'll look into
it more closely ;)
Thanks again,
Gabriele