Message-ID: <ZWbapeL34Z8AMR5f@DESKTOP-2CCOB1S.>
Date: Wed, 29 Nov 2023 07:31:01 +0100
From: Tobias Huschle <huschle@...ux.ibm.com>
To: Abel Wu <wuyun.abel@...edance.com>
Cc: Peter Zijlstra <peterz@...radead.org>,
Linux Kernel <linux-kernel@...r.kernel.org>, kvm@...r.kernel.org,
virtualization@...ts.linux.dev, netdev@...r.kernel.org, mst@...hat.com,
jasowang@...hat.com
Subject: Re: Re: Re: EEVDF/vhost regression (bisected to 86bfbb7ce4f6
sched/fair: Add lag based placement)
On Tue, Nov 28, 2023 at 04:55:11PM +0800, Abel Wu wrote:
> On 11/27/23 9:56 PM, Tobias Huschle wrote:
> > On Wed, Nov 22, 2023 at 11:00:16AM +0100, Peter Zijlstra wrote:
> > > On Tue, Nov 21, 2023 at 02:17:21PM +0100, Tobias Huschle wrote:
[...]
> > - At depth 4, the cgroup shows the observed vruntime value which is smaller
> > by a factor of 20, but depth 0 seems to be running with values of the
> > correct magnitude.
>
> A child being the running entity implies its parent is also cfs_rq->curr,
> but not vice versa when the parent has more than one child.
>
> > - cgroup at depth 0 has zero lag, with higher depth, there are large lag
> > values (as observed 606.338267 onwards)
>
> These values of se->vlag mean 'run this entity to parity' to avoid
> excess context switches, which is what RUN_TO_PARITY does (or nothing
> when !RUN_TO_PARITY). In short, se->vlag is not the vlag when se->on_rq.
>
Thanks for clarifying that. This makes things clearer to me.
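To make sure I understood the mechanism correctly, here is how I would model it:
while an entity runs, its otherwise-unused se->vlag field stashes a copy of the
deadline taken at pick time, and RUN_TO_PARITY keeps the current entity running as
long as that stash still matches the deadline. This is a simplified sketch, not the
kernel source; struct and function names only mirror fair.c:

```c
#include <stdbool.h>
#include <stdint.h>

typedef int64_t s64;
typedef uint64_t u64;

/* Simplified sched_entity: only the fields relevant here. */
struct sched_entity {
	u64 vruntime;
	u64 deadline;
	s64 vlag;      /* true lag when !on_rq; repurposed while running */
	bool on_rq;
};

/*
 * When an entity is picked to run, a copy of its deadline is stashed
 * in se->vlag (the field is otherwise unused while the entity runs).
 */
static void set_next_entity(struct sched_entity *se)
{
	se->vlag = (s64)se->deadline;   /* "run to parity" marker */
}

/*
 * With RUN_TO_PARITY, the current entity keeps running as long as the
 * stashed value still matches its deadline, i.e. until it is granted a
 * new slice (deadline moved) or is dequeued.
 */
static bool keeps_running(const struct sched_entity *curr, bool run_to_parity)
{
	return run_to_parity && curr->on_rq && curr->vlag == (s64)curr->deadline;
}
```

So a large se->vlag observed in the trace for an on_rq entity is just this stashed
deadline, not an actual lag.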
> >
> > Now the following occurs, triggered by the vhost:
> > - The kworker gets placed again with:
> > vruntime deadline
> > cgroup 56117619190 57650477291 -> depth 0, last known value
> > kworker 56117885776 56120885776 -> lag of -725
> > - vhost continues executing and updates its vruntime accordingly, here
> > I would need to enhance the trace to also print the vruntimes of the
> > parent sched_entities to see the progress of their vruntime/deadline/lag
> > values as well
> > - It is a bit puzzling that the EEVDF algorithm would not pick the kworker
> >   over the cgroup, as its deadline is smaller.
> >   But, the kworker has negative lag, which might cause EEVDF to not pick
> >   the kworker.
> >   The cgroup at depth 0 has no lag; all deeper layers have a significantly
> >   positive lag (last known values, which might have changed in the meantime).
> >   At this point I see two options: either the vhost task is stuck
> >   somewhere, or EEVDF just does not see the kworker as an eligible option.
>
> IMHO such lag should not introduce that long delay. Can you run the
> test again with NEXT_BUDDY disabled?
I added a trace event to the next-buddy path; it does not get triggered, so I'd
assume that no buddies are selected.
>
> >
> > - Once the vhost is migrated off the cpu, the update_entity_lag function
> > works with the following values at 606.467022: sched_update
> > For the cgroup at depth 0
> > - vruntime = 57104166665 --> this is in line with the amount of new timeslices
> > vhost got assigned while the kworker was waiting
> > - vlag = -62439022 --> the scheduler knows that the cgroup was
> > overconsuming, but no runtime for the kworker
> > For the cfs_rq we have
> > - min_vruntime = 56117885776 --> this matches the vruntime of the kworker
> > - avg_vruntime = 161750065796 --> this is rather large in comparison, but I
> > might access this value at a bad time
>
> Use avg_vruntime() instead.
Fair.
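That also explains why the raw value looked so large: cfs_rq->avg_vruntime is not
an average but a load-weighted sum of per-entity offsets from min_vruntime, and
avg_vruntime() divides that sum back out. A simplified sketch of the relation
(ignoring the running entity's extra contribution, which the real helper adds):

```c
#include <stdint.h>

typedef int64_t s64;
typedef uint64_t u64;

struct cfs_rq {
	u64 min_vruntime;
	s64 avg_vruntime;  /* \Sum w_i * (v_i - min_vruntime), NOT an average */
	u64 avg_load;      /* \Sum w_i */
};

/* Add one entity's contribution, as the kernel does incrementally. */
static void avg_vruntime_add(struct cfs_rq *rq, u64 vruntime, u64 weight)
{
	rq->avg_vruntime += (s64)weight * (s64)(vruntime - rq->min_vruntime);
	rq->avg_load += weight;
}

/* Fold the weighted sum back into a real average vruntime. */
static u64 avg_vruntime(const struct cfs_rq *rq)
{
	return rq->min_vruntime + (u64)(rq->avg_vruntime / (s64)rq->avg_load);
}
```

So reading the field directly, as my trace did, yields the blown-up sum rather
than a comparable vruntime.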
[...]
> >
> > ######################### full trace #########################
> >
> > sched_bestvnode: v=vruntime,d=deadline,l=vlag,md=min_deadline,dp=depth
> > --> during __pick_eevdf, prints the values of 'best' and of the 'node' loop variable in the first loop; the second loop is never executed
> >
> > sched_place/sched_update: sev=se->vruntime,sed=se->deadline,sev=se->vlag,avg=cfs_rq->avg_vruntime,min=cfs_rq->min_vruntime
>
> It would be better to replace cfs_rq->avg_vruntime with avg_vruntime().
> Although we can get the real @avg by (vruntime + vlag), I am not sure
> whether vlag (@lag in the trace) is se->vlag or the local variable in
> the place function, which is scaled and no longer the true vlag.
>
Oh, my bad: sev is the vlag value of the sched_entity; lag is the local variable.
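For completeness, the scaling Abel refers to: in place_entity() (the lag-based
placement from 86bfbb7ce4f6), the stored lag is multiplied by
(queue load + entity weight) / queue load before being subtracted from the
placement vruntime, so the traced local @lag is larger in magnitude than
se->vlag. A simplified sketch with unscaled weights and no curr adjustment:

```c
#include <stdint.h>

typedef int64_t s64;
typedef uint64_t u64;

/*
 * Scale the entity's stored lag so that, once the entity's own weight
 * joins the queue, the average vruntime seen by everyone else is
 * preserved: lag' = lag * (load + w_se) / load.
 */
static s64 scale_lag(s64 vlag, u64 cfs_load, u64 se_weight)
{
	if (!cfs_load)
		cfs_load = 1;   /* guard the empty-queue case */
	return vlag * (s64)(cfs_load + se_weight) / (s64)cfs_load;
}
```

For example, plugging in the kworker's traced vlag of -725 with an assumed queue
load of 2048 and entity weight of 1024 (the load and weight are illustrative, not
from the trace) gives a scaled lag of -1087.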
[...]
> > vhost-2931-2953 [013] d.... 606.338313: sched_wakeup: comm=kworker/13:1 pid=168 prio=120 target_cpu=013
> > --> kworker set to runnable, but vhost keeps on executing
>
> What are the weights of the two entities?
I'll do another run and look at those values.
[...]