Message-ID: <20230316202648.1f8c2f80@kernel.org>
Date: Thu, 16 Mar 2023 20:26:48 -0700
From: Jakub Kicinski <kuba@...nel.org>
To: Jason Xing <kerneljasonxing@...il.com>
Cc: jbrouer@...hat.com, davem@...emloft.net, edumazet@...gle.com,
pabeni@...hat.com, ast@...nel.org, daniel@...earbox.net,
hawk@...nel.org, john.fastabend@...il.com,
stephen@...workplumber.org, simon.horman@...igine.com,
sinquersw@...il.com, bpf@...r.kernel.org, netdev@...r.kernel.org,
Jason Xing <kernelxing@...cent.com>
Subject: Re: [PATCH v4 net-next 2/2] net: introduce budget_squeeze to help
us tune rx behavior
On Fri, 17 Mar 2023 10:27:11 +0800 Jason Xing wrote:
> > That is the common case, and can be understood from the napi trace
>
> Thanks for your reply. This happens every day on many of our servers.
Right, but the common issue is the time squeeze, not the budget
squeeze, and either way the budget squeeze doesn't really matter
because the softirq loop will call us again soon, as long as softirq
itself is not scheduled out.
So if you want to monitor a meaningful event in your fleet, I think
a better thing to track is the number of times ksoftirqd was woken
up and the latency of it getting onto the CPU.
Did you try to measure that?
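Something like the bpftrace sketch below (untested, written from
memory; the thread-name match and the sched tracepoint field names
are assumptions on my side) would give you both the wakeup count and
a histogram of how long ksoftirqd sits runnable before it actually
gets the CPU:

tracepoint:sched:sched_wakeup
/strncmp(args->comm, "ksoftirqd/", 10) == 0/
{
	/* count wakeups and remember when ksoftirqd became runnable */
	@wakeups = count();
	@runnable_ts[args->pid] = nsecs;
}

tracepoint:sched:sched_switch
/@runnable_ts[args->next_pid]/
{
	/* wakeup-to-running latency, in microseconds */
	@sched_latency_us = hist((nsecs - @runnable_ts[args->next_pid]) / 1000);
	delete(@runnable_ts[args->next_pid]);
}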
(Please do *not* send patches to touch softirq code right now, just
measure first. We are trying to improve the situation but the core
kernel maintainers are wary of changes:
https://lwn.net/Articles/925540/
so if both of us start sending code they will probably take neither
patch :()
> > point and probing the kernel with bpftrace. We should only add
>
> We can probably deduce (or guess) which one causes the latency because
> trace_napi_poll() only counts the budget consumed per poll.
>
> Besides, tracing napi poll is totally fine on a testbed but not on
> heavily loaded production servers, where bpftrace-based tools that
> capture data from the hot path can have a noticeable impact,
> especially on hosts with high-speed NICs, say, 100G cards. Resorting
> to the legacy softnet_stat file is relatively feasible, based on my
> limited knowledge.
Right, but we're still measuring something relatively irrelevant.
As I said, the softirq loop will call us again. In my experience
network queues get long when ksoftirqd is woken up but not scheduled
for a long time. That is the source of latency. You may have the same
problem (high latency) without consuming the entire budget.
I think if we wanna make new stats we should try to come up with a way
of capturing the problem rather than one of the symptoms.
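If running the napi trace is acceptable on even a small subset of
machines, a bpftrace sketch along these lines (assuming the
napi:napi_poll tracepoint with its work and budget fields) would show
the per-poll work distribution and how often the full budget was
consumed, without extending softnet_stat:

tracepoint:napi:napi_poll
{
	/* distribution of packets processed per poll, per device */
	@work[str(args->dev_name)] = hist(args->work);
}

tracepoint:napi:napi_poll
/args->work == args->budget/
{
	/* polls that used up the whole budget, i.e. candidate budget squeezes */
	@full_budget[str(args->dev_name)] = count();
}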
> Paolo also added backlog queue data to this file in 2020 (see commit
> 7d58e6555870d). I believe that after this patch there will be little
> or no new data that needs to be printed for the next few years.
>
> > uAPI for statistics which must be maintained contiguously. For
>
> In this patch, as suggested in the previous emails, I didn't touch
> the old data and only split the old @time_squeeze counter into two
> parts (time_squeeze and budget_squeeze). Having a separate
> budget_squeeze can help us profile a server and tune it more
> effectively.
>
> > investigations tracing will always be orders of magnitude more
> > powerful :(
>
> > On the time squeeze BTW, have you found out what the problem was?
> > In workloads I've seen the time problems are often because of noise
> > in how jiffies are accounted (cgroup code disables interrupts
> > for long periods of time, for example, making jiffies increment
> > by 2, 3 or 4 rather than by 1).
>
> Yes! The jiffies increment issue troubles those servers more often
> than not. For a small group of servers, the budget limit is also a
> problem. Sometimes we might treat the guest OS differently.
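A cheap way to check the jiffies theory on an affected machine (a
rough sketch, assuming net_rx_action is not inlined and is visible to
kprobes) is to histogram how long one net_rx_action pass actually
runs and compare that against netdev_budget_usecs:

kprobe:net_rx_action
{
	@start[tid] = nsecs;
}

kretprobe:net_rx_action
/@start[tid]/
{
	/* wall-clock runtime of one softirq pass; time squeezes reported
	 * well below netdev_budget_usecs (2000 by default) point at
	 * jiffies accounting noise rather than real overload */
	@net_rx_action_us = hist((nsecs - @start[tid]) / 1000);
	delete(@start[tid]);
}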