Date:	Sun, 10 May 2009 09:43:28 +0200
From:	Eric Dumazet <dada1@...mosbay.com>
To:	David Miller <davem@...emloft.net>
CC:	khc@...waw.pl, netdev@...r.kernel.org
Subject: Re: [PATCH] net: reduce number of reference taken on sk_refcnt

Eric Dumazet wrote:
> David Miller wrote:
>> From: David Miller <davem@...emloft.net>
>> Date: Sat, 09 May 2009 13:34:54 -0700 (PDT)
>>
>>> Consider the case where we always send some message on CPU A and
>>> then process the ACK on CPU B.  We'll always be cancelling the
>>> timer on a foreign cpu.
>> I should also mention that TCP has a peculiar optimization of timers
>> that is likely being thwarted by your workload.  It never deletes
>> timers under normal operation; it simply lets them expire anyway,
>> and the handler notices that there is "nothing to do" and returns.
> 
> Yes, you are referring to the INET_CSK_CLEAR_TIMERS condition, which is never set.
> 
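(For reference, the helper in include/net/inet_connection_sock.h; I am
quoting it trimmed and from memory, so treat it as a sketch.  With
INET_CSK_CLEAR_TIMERS not defined, "clearing" a timer only resets the
pending flags and leaves the armed timer to expire on its own:)

static inline void inet_csk_clear_xmit_timer(struct sock *sk, const int what)
{
	struct inet_connection_sock *icsk = inet_csk(sk);

	if (what == ICSK_TIME_RETRANS || what == ICSK_TIME_PROBE0) {
		icsk->icsk_pending = 0;
#ifdef INET_CSK_CLEAR_TIMERS		/* never defined */
		sk_stop_timer(sk, &icsk->icsk_retransmit_timer);
#endif
	} else if (what == ICSK_TIME_DACK) {
		icsk->icsk_ack.blocked = icsk->icsk_ack.pending = 0;
#ifdef INET_CSK_CLEAR_TIMERS		/* never defined */
		sk_stop_timer(sk, &icsk->icsk_delack_timer);
#endif
	}
}
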
>> But when the connection does shut down, we have to purge all of
>> these timers.
>>
>> That could be another part of why you see timers in your profile.
>>
>>
> 
> Well, in my workload they should never expire, since the application
> exchanges enough data in both directions and there are no losses
> (Gigabit LAN context).
> 
> On the machine acting as a server (the one I am focusing on, of course),
> each incoming frame:
> 
> - Contains the ACK for the previously sent frame
> - Contains data provided by the client
> - Starts a timer for the delayed ACK
> 
> Then the server application reacts and sends a new payload, and the TCP stack:
> - Sends a frame that acknowledges the previously received frame
> - Includes data provided by the server application
> - Starts a timer for retransmitting this frame if no ACK is received later.
> 
> So yes, each incoming and each outgoing frame is going to call mod_timer()
> 
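(For reference, sk_reset_timer() is only a thin wrapper around
mod_timer(); roughly, from net/core/sock.c:)

/* Every delayed-ACK or retransmit (re)arm goes straight to mod_timer(),
 * taking a socket reference only when the timer was not already pending.
 */
void sk_reset_timer(struct sock *sk, struct timer_list *timer,
		    unsigned long expires)
{
	if (!mod_timer(timer, expires))
		sock_hold(sk);
}
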
> The problem is that incoming processing is done by CPU 0 (the one dedicated
> to NAPI processing because of the stress situation: CPU 100% in softirq land),
> while outgoing processing is done by the other CPUs in the machine.
> 
> offsetof(struct inet_connection_sock, icsk_retransmit_timer)=0x208
> offsetof(struct inet_connection_sock, icsk_delack_timer)=0x238
> 
> So there are cache line ping-pongs, but oprofile seems to point
> to spinlock contention in lock_timer_base(), and I don't know why...
> Shouldn't (in my workload) all the delack_timers belong to CPU 0, and
> the retransmit_timers to the other CPUs?
> 
> Or does mod_timer never migrate an already armed timer?
> 
> That would explain the lock contention on timer_base; we should
> take care of it if possible.
> 
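From memory, the timer core answers that question (trimmed sketch of
lock_timer_base() from kernel/timer.c; details may be slightly off):
a pending timer keeps the per-CPU base of the CPU that last armed it,
and __mod_timer() migrates it to the local base only after first taking
the old base's lock.  So when CPU 0 and an application CPU keep rearming
the same icsk timer, its base bounces between them and each rearm has to
lock the other CPU's base:

static struct tvec_base *lock_timer_base(struct timer_list *timer,
					 unsigned long *flags)
{
	struct tvec_base *base;

	for (;;) {
		/* timer->base can only change under the old base's lock */
		base = tbase_get_base(timer->base);
		if (likely(base != NULL)) {
			spin_lock_irqsave(&base->lock, *flags);
			if (likely(base == tbase_get_base(timer->base)))
				return base;
			/* the timer migrated under us; retry */
			spin_unlock_irqrestore(&base->lock, *flags);
		}
		cpu_relax();
	}
}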

ftrace is my friend :)

The problem is that the application, when doing its recv() call,
ends up calling tcp_send_delayed_ack() too.

So yes, the CPUs are fighting pretty hard over icsk_delack_timer
and its timer_base.
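
The trace below shows the call chain; for context, the tail of
tcp_send_delayed_ack() (net/ipv4/tcp_output.c, trimmed) is just:

	/* arm (or push back) the delayed-ACK timer for this socket */
	icsk->icsk_ack.pending |= ICSK_ACK_SCHED | ICSK_ACK_TIMER;
	icsk->icsk_ack.timeout = timeout;
	sk_reset_timer(sk, &icsk->icsk_delack_timer, timeout);

so every recv() that drains the prequeue rearms icsk_delack_timer from
the application CPU, while the softirq path on CPU 0 can arm it as well
(from tcp_prequeue(), if I read it right).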


2631.936051: finish_task_switch <-schedule
2631.936051: perf_counter_task_sched_in <-finish_task_switch
2631.936051: __perf_counter_sched_in <-perf_counter_task_sched_in
2631.936051: _spin_lock <-__perf_counter_sched_in
2631.936052: lock_sock_nested <-sk_wait_data
2631.936052: _spin_lock_bh <-lock_sock_nested
2631.936052: local_bh_disable <-_spin_lock_bh
2631.936052: local_bh_enable <-lock_sock_nested
2631.936052: finish_wait <-sk_wait_data
2631.936053: tcp_prequeue_process <-tcp_recvmsg
2631.936053: local_bh_disable <-tcp_prequeue_process
2631.936053: tcp_v4_do_rcv <-tcp_prequeue_process
2631.936053: tcp_rcv_established <-tcp_v4_do_rcv
2631.936054: local_bh_enable <-tcp_rcv_established
2631.936054: skb_copy_datagram_iovec <-tcp_rcv_established
2631.936054: memcpy_toiovec <-skb_copy_datagram_iovec
2631.936054: copy_to_user <-memcpy_toiovec
2631.936054: tcp_rcv_space_adjust <-tcp_rcv_established
2631.936055: local_bh_disable <-tcp_rcv_established
2631.936055: tcp_event_data_recv <-tcp_rcv_established
2631.936055: tcp_ack <-tcp_rcv_established
2631.936056: __kfree_skb <-tcp_ack
2631.936056: skb_release_head_state <-__kfree_skb
2631.936056: dst_release <-skb_release_head_state
2631.936056: skb_release_data <-__kfree_skb
2631.936056: put_page <-skb_release_data
2631.936057: kfree <-skb_release_data
2631.936057: kmem_cache_free <-__kfree_skb
2631.936057: tcp_valid_rtt_meas <-tcp_ack
2631.936058: bictcp_acked <-tcp_ack
2631.936058: bictcp_cong_avoid <-tcp_ack
2631.936058: tcp_is_cwnd_limited <-bictcp_cong_avoid
2631.936058: tcp_current_mss <-tcp_rcv_established
2631.936058: tcp_established_options <-tcp_current_mss
2631.936058: __tcp_push_pending_frames <-tcp_rcv_established
2631.936059: __tcp_ack_snd_check <-tcp_rcv_established
2631.936059: tcp_send_delayed_ack <-__tcp_ack_snd_check
2631.936059: sk_reset_timer <-tcp_send_delayed_ack
2631.936059: mod_timer <-sk_reset_timer
2631.936059: lock_timer_base <-mod_timer
2631.936059: _spin_lock_irqsave <-lock_timer_base
2631.936059: _spin_lock <-mod_timer
2631.936060: internal_add_timer <-mod_timer
2631.936064: _spin_unlock_irqrestore <-mod_timer
2631.936064: __kfree_skb <-tcp_rcv_established
2631.936064: skb_release_head_state <-__kfree_skb
2631.936064: dst_release <-skb_release_head_state
2631.936065: skb_release_data <-__kfree_skb
2631.936065: kfree <-skb_release_data
2631.936065: __slab_free <-kfree
2631.936065: add_partial <-__slab_free
2631.936065: _spin_lock <-add_partial
2631.936066: kmem_cache_free <-__kfree_skb
2631.936066: __slab_free <-kmem_cache_free
2631.936066: add_partial <-__slab_free
2631.936067: _spin_lock <-add_partial
2631.936067: local_bh_enable <-tcp_prequeue_process
2631.936067: tcp_cleanup_rbuf <-tcp_recvmsg
2631.936067: __tcp_select_window <-tcp_cleanup_rbuf
2631.936067: release_sock <-tcp_recvmsg
2631.936068: _spin_lock_bh <-release_sock
2631.936068: local_bh_disable <-_spin_lock_bh
2631.936068: _spin_unlock_bh <-release_sock
2631.936068: local_bh_enable_ip <-_spin_unlock_bh
2631.936068: fput <-sys_recvfrom

