Message-ID: <20240502091617.GZ30852@noisy.programming.kicks-ass.net>
Date: Thu, 2 May 2024 11:16:17 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: "Michael S. Tsirkin" <mst@...hat.com>
Cc: Tobias Huschle <huschle@...ux.ibm.com>,
Luis Machado <luis.machado@....com>,
Jason Wang <jasowang@...hat.com>,
Abel Wu <wuyun.abel@...edance.com>,
Linux Kernel <linux-kernel@...r.kernel.org>, kvm@...r.kernel.org,
virtualization@...ts.linux.dev, netdev@...r.kernel.org,
nd <nd@....com>, borntraeger@...ux.ibm.com,
Ingo Molnar <mingo@...nel.org>,
Mike Galbraith <umgwanakikbuti@...il.com>
Subject: Re: EEVDF/vhost regression (bisected to 86bfbb7ce4f6 sched/fair: Add
lag based placement)

On Wed, May 01, 2024 at 11:31:02AM -0400, Michael S. Tsirkin wrote:
> On Wed, May 01, 2024 at 12:51:51PM +0200, Peter Zijlstra wrote:
> > I'm still wondering why exactly it is imperative for t2 to preempt t1.
> > Is there some unexpressed serialization / spin-waiting ?
>
>
> I am not sure, but I think the point is that t2 is a kworker. It is
> much cheaper to run it right now, while we are already in the kernel,
> than to return to userspace, let it run for a bit, then interrupt it
> and then run t2.
> Right, Tobias?
So that is fundamentally a consequence of using a kworker.
So I tried to have a quick peek at vhost to figure out why you're using
kworkers... but no luck :/
Also, when I look at drivers/vhost/ it seems to implement its own
worker and not use normal workqueues or even kthread_worker. Did we
really need yet another copy of all that?
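For context, the existing kthread_worker API referenced above already covers the "dedicated kernel thread that runs queued work" pattern. The sketch below shows that API shape only; the vhost_* names and module structure are hypothetical, not the actual vhost implementation:

```c
/* Hypothetical sketch of dispatching vhost-style work through the
 * stock kthread_worker API from <linux/kthread.h>, rather than a
 * hand-rolled worker. The vhost_* names here are made up for
 * illustration; this is not the real drivers/vhost/ code. */
#include <linux/kthread.h>
#include <linux/err.h>

static struct kthread_worker *vhost_kworker;  /* hypothetical */
static struct kthread_work vhost_work;        /* hypothetical */

static void vhost_work_fn(struct kthread_work *work)
{
	/* process queued virtqueue work here */
}

static int vhost_worker_setup(void)
{
	/* creates and starts a dedicated kthread backing the worker */
	vhost_kworker = kthread_create_worker(0, "vhost-worker");
	if (IS_ERR(vhost_kworker))
		return PTR_ERR(vhost_kworker);

	kthread_init_work(&vhost_work, vhost_work_fn);
	return 0;
}

static void vhost_kick(void)
{
	/* queue the work; the worker kthread will run vhost_work_fn */
	kthread_queue_work(vhost_kworker, &vhost_work);
}

static void vhost_worker_teardown(void)
{
	/* flushes pending work and stops the backing kthread */
	kthread_destroy_worker(vhost_kworker);
}
```

The scheduling question in the thread is orthogonal to which worker abstraction is used: in all three variants (open-coded worker, kthread_worker, workqueue) the work runs in a separate kernel thread whose wakeup placement is decided by the scheduler.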
Anyway, I tried to have a quick look at the code, but I can't seem to
get a handle on what it's doing or why.