Message-ID: <20201023104853.55ef1c20@kicinski-fedora-PC1C0HJN.hsd1.ca.comcast.net>
Date: Fri, 23 Oct 2020 10:48:53 -0700
From: Jakub Kicinski <kuba@...nel.org>
To: Josh Don <joshdon@...gle.com>
Cc: Ingo Molnar <mingo@...hat.com>,
Peter Zijlstra <peterz@...radead.org>,
Juri Lelli <juri.lelli@...hat.com>,
Vincent Guittot <vincent.guittot@...aro.org>,
"David S. Miller" <davem@...emloft.net>,
Dietmar Eggemann <dietmar.eggemann@....com>,
Steven Rostedt <rostedt@...dmis.org>,
Ben Segall <bsegall@...gle.com>, Mel Gorman <mgorman@...e.de>,
Paolo Bonzini <pbonzini@...hat.com>,
Eric Dumazet <edumazet@...gle.com>,
linux-kernel@...r.kernel.org, netdev@...r.kernel.org,
kvm@...r.kernel.org, Xi Wang <xii@...gle.com>
Subject: Re: [PATCH 1/3] sched: better handling for busy polling loops
On Thu, 22 Oct 2020 20:29:42 -0700 Josh Don wrote:
> Busy polling loops in the kernel such as network socket poll and kvm
> halt polling have performance problems related to process scheduler load
> accounting.
>
> Both of the busy polling examples are opportunistic - they relinquish
> the cpu if another thread is ready to run.
That makes it sound like the busy poll code is trying to behave like an
idle task. I thought need_resched() meant we leave when we run out of
our slice, or when the kernel needs to go through a resched for internal
reasons. No?
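For context, the pattern both loops follow is roughly the below. This is
a simplified userspace sketch, not the actual kernel code: data_ready()
and resched_requested() are made-up stand-ins for the real completion
check and need_resched(), and sched_yield() stands in for schedule().

#include <sched.h>
#include <stdbool.h>
#include <time.h>

/* Placeholder completion check -- in the real code this is e.g.
 * "did a packet arrive" (napi poll) or "did the guest become
 * runnable" (kvm halt polling). */
static bool data_ready(void)
{
	return false;
}

/* Placeholder for need_resched() -- true when another runnable
 * thread wants this cpu. */
static bool resched_requested(void)
{
	return false;
}

/* Opportunistic busy poll: spin until there is work, the poll
 * window expires, or someone else wants the cpu. */
static bool busy_poll(long window_ns)
{
	struct timespec start, now;

	clock_gettime(CLOCK_MONOTONIC, &start);
	for (;;) {
		if (data_ready())
			return true;		/* poll succeeded */
		if (resched_requested()) {
			sched_yield();		/* stand-in for schedule() */
			return false;
		}
		clock_gettime(CLOCK_MONOTONIC, &now);
		if ((now.tv_sec - start.tv_sec) * 1000000000L +
		    (now.tv_nsec - start.tv_nsec) > window_ns)
			return false;		/* window expired */
	}
}

int main(void)
{
	/* e.g. a 50us poll window */
	return busy_poll(50 * 1000) ? 0 : 1;
}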
> This design, however, doesn't
> extend to multiprocessor load balancing very well. The scheduler still
> sees the busy polling cpu as 100% busy and will be less likely to put
> another thread on that cpu. In other words, if all cores are 100%
> utilized, with some running real workloads and others running busy
> polling loops, newly woken threads will not prefer the busy polling
> cpus. System-wide throughput and latency may suffer.
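To make the accounting point above concrete: from the cpu time
accounting side, a busy polling loop is indistinguishable from real
work. A rough userspace demonstration follows (note /proc/stat tick
accounting is only a coarse proxy for the PELT signals the load
balancer actually uses):

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <string.h>

/* Read the "cpu0" line of /proc/stat; return idle+iowait and total
 * jiffies.  Returns 1 on success, 0 on failure. */
static int cpu0_times(unsigned long long *idle, unsigned long long *total)
{
	char line[256];
	unsigned long long v[10] = {0};
	FILE *f = fopen("/proc/stat", "r");
	int n = 0, i;

	if (!f)
		return 0;
	while (fgets(line, sizeof(line), f)) {
		if (strncmp(line, "cpu0 ", 5) == 0) {
			n = sscanf(line + 5,
				   "%llu %llu %llu %llu %llu %llu %llu %llu %llu %llu",
				   &v[0], &v[1], &v[2], &v[3], &v[4],
				   &v[5], &v[6], &v[7], &v[8], &v[9]);
			break;
		}
	}
	fclose(f);
	if (n < 4)
		return 0;
	*idle = v[3] + v[4];		/* idle + iowait columns */
	for (*total = 0, i = 0; i < n; i++)
		*total += v[i];
	return 1;
}

int main(void)
{
	cpu_set_t set;
	unsigned long long i0, t0, i1, t1;
	volatile unsigned long spin = 0;

	/* Pin ourselves to cpu0 so the accounting we read is ours. */
	CPU_ZERO(&set);
	CPU_SET(0, &set);
	if (sched_setaffinity(0, sizeof(set), &set))
		return 1;
	if (!cpu0_times(&i0, &t0))
		return 1;
	for (unsigned long k = 0; k < 2000000000UL; k++)
		spin++;			/* stand-in for a busy poll loop */
	if (!cpu0_times(&i1, &t1))
		return 1;
	/* cpu0 reports (close to) 100% busy even though no useful work
	 * was done -- this is what the load balancer sees, too. */
	printf("cpu0 busy: %.1f%%\n",
	       100.0 * (1.0 - (double)(i1 - i0) / (double)(t1 - t0)));
	return 0;
}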
IDK how well this extends to networking. Busy polling in networking is
a conscious trade-off of CPU for latency; if the application chooses to
busy poll (which isn't the default), we should respect that.
Is your use case primarily kvm?
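For reference, the opt-in is per socket via SO_BUSY_POLL (or system
wide via the net.core.busy_poll / net.core.busy_read sysctls); a
minimal sketch:

#include <stdio.h>
#include <sys/socket.h>

/* Opt a socket into busy polling for up to `usecs` microseconds.
 * SO_BUSY_POLL is the per-socket knob; net.core.busy_read applies
 * a budget to blocking reads system wide. */
static int enable_busy_poll(int fd, int usecs)
{
	if (setsockopt(fd, SOL_SOCKET, SO_BUSY_POLL,
		       &usecs, sizeof(usecs)) < 0) {
		perror("setsockopt(SO_BUSY_POLL)");
		return -1;
	}
	return 0;
}

Both sysctls default to 0 (off), which is the "isn't the default"
above.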