Message-ID: <ZjOFxbsHuQZ+Zltu@DESKTOP-2CCOB1S.>
Date: Thu, 2 May 2024 14:23:33 +0200
From: Tobias Huschle <huschle@...ux.ibm.com>
To: "Michael S. Tsirkin" <mst@...hat.com>
Cc: Peter Zijlstra <peterz@...radead.org>, Luis Machado <luis.machado@....com>,
Jason Wang <jasowang@...hat.com>, Abel Wu <wuyun.abel@...edance.com>,
Linux Kernel <linux-kernel@...r.kernel.org>, kvm@...r.kernel.org,
virtualization@...ts.linux.dev, netdev@...r.kernel.org,
nd <nd@....com>, borntraeger@...ux.ibm.com,
Ingo Molnar <mingo@...nel.org>,
Mike Galbraith <umgwanakikbuti@...il.com>
Subject: Re: EEVDF/vhost regression (bisected to 86bfbb7ce4f6 sched/fair: Add
lag based placement)
On Wed, May 01, 2024 at 11:31:02AM -0400, Michael S. Tsirkin wrote:
> On Wed, May 01, 2024 at 12:51:51PM +0200, Peter Zijlstra wrote:
> > On Tue, Apr 30, 2024 at 12:50:05PM +0200, Tobias Huschle wrote:
<...>
> >
> > I'm still wondering why exactly it is imperative for t2 to preempt t1.
> > Is there some unexpressed serialization / spin-waiting ?
>
>
> I am not sure but I think the point is that t2 is a kworker. It is
> much cheaper to run it right now when we are already in the kernel
> than return to userspace, let it run for a bit then interrupt it
> and then run t2.
> Right, Tobias?
>
That would be correct. The optimal scenario would be that t1, the vhost,
does its thing, wakes up t2, the kworker, makes sure that t2 executes
immediately, then gets control back and continues its loop without ever
leaving kernel space.