Message-ID: <CANn89i+Y+fMqPbT-YyKcbQOH96+D=U8K4wnNgFkGYtjStKPrEQ@mail.gmail.com>
Date: Tue, 22 Oct 2019 16:24:31 -0700
From: Eric Dumazet <edumazet@...gle.com>
To: Cong Wang <xiyou.wangcong@...il.com>
Cc: netdev <netdev@...r.kernel.org>, Yuchung Cheng <ycheng@...gle.com>,
Neal Cardwell <ncardwell@...gle.com>
Subject: Re: [Patch net-next 3/3] tcp: decouple TLP timer from RTO timer
On Tue, Oct 22, 2019 at 4:11 PM Cong Wang <xiyou.wangcong@...il.com> wrote:
>
> Currently RTO, TLP and PROBE0 all share the same timer instance
> in the kernel and use icsk->icsk_pending to dispatch the work.
> This causes spinlock contention when the timer is reset too
> frequently, as clearly shown in the perf report:
>
> 61.72% 61.71% swapper [kernel.kallsyms] [k] queued_spin_lock_slowpath
> ...
> - 58.83% tcp_v4_rcv
> - 58.80% tcp_v4_do_rcv
> - 58.80% tcp_rcv_established
> - 52.88% __tcp_push_pending_frames
> - 52.88% tcp_write_xmit
> - 28.16% tcp_event_new_data_sent
> - 28.15% sk_reset_timer
> + mod_timer
> - 24.68% tcp_schedule_loss_probe
> - 24.68% sk_reset_timer
> + 24.68% mod_timer
>
> This patch decouples the TLP timer from the RTO timer by adding
> a new timer instance, but still uses icsk->icsk_pending for
> dispatch, in order to minimize the risk of this change.
>
> After this patch, the CPU time spent in tcp_write_xmit() is
> reduced to 10.92%.
What is the exact benchmark you are running?

We never saw any contention like that, so let's make sure you are not
working around another issue.
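
For readers following the thread, here is a minimal userspace sketch of the
dispatch scheme the patch describes: one shared deadline for RTO/PROBE0, a
separate deadline for TLP, and a single 'pending' field (playing the role of
icsk_pending) deciding what to do when a timer fires. All struct and function
names below are invented for illustration; this is not the kernel code or the
patch itself.

/* Simplified model only -- not kernel code. */
#include <stdio.h>

enum pending_event {
	EV_NONE = 0,
	EV_RETRANS,     /* RTO */
	EV_LOSS_PROBE,  /* TLP */
	EV_PROBE0,      /* zero-window probe */
};

struct sock_model {
	enum pending_event pending;   /* plays the role of icsk_pending */
	unsigned long rto_expires;    /* shared RTO/PROBE0 deadline */
	unsigned long tlp_expires;    /* separate TLP deadline (what the patch adds) */
};

/* Before the patch: every event re-arms the one shared timer. */
static void arm_shared(struct sock_model *sk, enum pending_event ev,
		       unsigned long when)
{
	sk->pending = ev;
	sk->rto_expires = when;   /* models sk_reset_timer() -> mod_timer() */
}

/* After the patch: TLP arms its own timer; dispatch still keys off 'pending'. */
static void arm_tlp(struct sock_model *sk, unsigned long when)
{
	sk->pending = EV_LOSS_PROBE;
	sk->tlp_expires = when;   /* no mod_timer() on the shared timer */
}

static void timer_fired(struct sock_model *sk)
{
	switch (sk->pending) {
	case EV_RETRANS:
		printf("retransmit (RTO)\n");
		break;
	case EV_LOSS_PROBE:
		printf("send tail loss probe\n");
		break;
	case EV_PROBE0:
		printf("send zero-window probe\n");
		break;
	default:
		break;
	}
	sk->pending = EV_NONE;
}

int main(void)
{
	struct sock_model sk = { 0 };

	arm_shared(&sk, EV_RETRANS, 200);  /* new data sent: arm RTO */
	arm_tlp(&sk, 40);                  /* schedule TLP without touching the shared timer */
	timer_fired(&sk);                  /* dispatch on 'pending' when a timer fires */
	return 0;
}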