Open Source and information security mailing list archives
Message-ID: <1272797690.2173.26.camel@edumazet-laptop>
Date: Sun, 02 May 2010 12:54:50 +0200
From: Eric Dumazet <eric.dumazet@...il.com>
To: Andi Kleen <andi@...stfloor.org>
Cc: David Miller <davem@...emloft.net>, hadi@...erus.ca, xiaosuo@...il.com,
	therbert@...gle.com, shemminger@...tta.com, netdev@...r.kernel.org,
	lenb@...nel.org, arjan@...radead.org
Subject: Re: [PATCH v6] net: batch skb dequeueing from softnet input_pkt_queue

On Sunday 02 May 2010 at 11:20 +0200, Andi Kleen wrote:
> > I tried it on the right spot (since my bench was only doing recvmsg()
> > calls, I had to patch wait_for_packet() in net/core/datagram.c):
> >
> > udp_recvmsg -> __skb_recv_datagram -> wait_for_packet ->
> >   schedule_timeout
> >
> > Unfortunately, using io_schedule_timeout() did not solve the problem.
>
> Hmm, too bad. Weird.
>
> > Tell me if you need some traces or something.
>
> I'll try to reproduce it and see what I can do.

Here is the perf report from the latest test; I confirm this kernel is
using io_schedule_timeout().

In this test, all 16 queues of one BCM57711E NIC (1 Gb link) deliver
packets at about 1,300,000 pps to 16 cpus (one cpu per queue), and these
packets are then redistributed by RPS to the same 16 cpus, generating
about 650,000 IPIs per second.

top says:

Cpu(s):  3.0%us, 17.3%sy,  0.0%ni, 22.4%id, 28.2%wa,  0.0%hi, 29.1%si,  0.0%st

# Samples: 321362570767
#
# Overhead  Command  Shared Object  Symbol
# ........  .......  .............  ......
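[Editorial note: a quick, hedged sanity check of the rates quoted above; this is pure arithmetic on the figures stated in the message, nothing re-measured.]

```python
# Figures quoted in the message: 1,300,000 pps across 16 RX queues,
# redistributed by RPS, generating ~650,000 IPIs per second.
pps_total = 1_300_000   # packets per second delivered by the NIC
queues = 16             # RX queues, one cpu per queue
ipi_per_sec = 650_000   # RPS cross-cpu IPIs per second

pps_per_queue = pps_total / queues        # load seen by each queue/cpu
packets_per_ipi = pps_total / ipi_per_sec # packets coalesced per IPI

print(pps_per_queue)    # 81250.0
print(packets_per_ipi)  # 2.0
```

So each IPI covers only about two packets on average, which is consistent with the IPI cost dominating the profile below.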
    25.08%  init  [kernel.kallsyms]  [k] _raw_spin_lock_irqsave
            |
            --- _raw_spin_lock_irqsave
               |
               |--93.47%-- clockevents_notify
               |           lapic_timer_state_broadcast
               |           acpi_idle_enter_bm
               |           cpuidle_idle_call
               |           cpu_idle
               |           start_secondary
               |
               |--4.70%-- tick_broadcast_oneshot_control
               |          tick_notify
               |          notifier_call_chain
               |          __raw_notifier_call_chain
               |          raw_notifier_call_chain
               |          clockevents_do_notify
               |          clockevents_notify
               |          lapic_timer_state_broadcast
               |          acpi_idle_enter_bm
               |          cpuidle_idle_call
               |          cpu_idle
               |          start_secondary
               |
               |--0.64%-- generic_exec_single
               |          __smp_call_function_single
               |          net_rps_action_and_irq_enable
               ...

     9.72%  init  [kernel.kallsyms]  [k] acpi_os_read_port
            |
            --- acpi_os_read_port
               |
               |--99.45%-- acpi_hw_read_port
               |           acpi_hw_read
               |           acpi_hw_read_multiple
               |           acpi_hw_register_read
               |           acpi_read_bit_register
               |           acpi_idle_enter_bm
               |           cpuidle_idle_call
               |           cpu_idle
               |           start_secondary
               |
                --0.55%-- acpi_hw_read
                          acpi_hw_read_multiple

powertop says:

PowerTOP version 1.11      (C) 2007 Intel Corporation

Cn                Avg residency          P-states (frequencies)
C0 (cpu running)        (68.9%)          2.93 Ghz    46.5%
polling          0.0ms ( 0.0%)           2.80 Ghz     5.1%
C1 mwait         0.0ms ( 0.0%)           2.53 Ghz     3.0%
C2 mwait         0.0ms (31.1%)           2.13 Ghz     2.8%
                                         1.60 Ghz    38.2%

Wakeups-from-idle per second : 45177.8   interval: 5.0s
no ACPI power usage estimate available

Top causes for wakeups:
  9.9% (40863.0)       <interrupt> : eth1-fp-7
  9.9% (40861.0)       <interrupt> : eth1-fp-8
  9.9% (40858.0)       <interrupt> : eth1-fp-5
  9.9% (40855.2)       <interrupt> : eth1-fp-10
  9.9% (40847.6)       <interrupt> : eth1-fp-14
  9.9% (40847.2)       <interrupt> : eth1-fp-12
  9.9% (40835.0)       <interrupt> : eth1-fp-1
  9.9% (40834.2)       <interrupt> : eth1-fp-3
  9.9% (40834.0)       <interrupt> : eth1-fp-6
  9.9% (40829.6)       <interrupt> : eth1-fp-4
  1.0% ( 4002.0)       <kernel core> : hrtimer_start_range_ns (tick_sched_timer)
  0.4% ( 1725.6)       <interrupt> : extra timer interrupt
  0.0% (    4.0)       <kernel core> : usb_hcd_poll_rh_status (rh_timer_func)
  0.0% (    2.0)       <kernel core> : clocksource_watchdog (clocksource_watchdog)
  0.0% (    2.0)       snmpd : hrtimer_start_range_ns
                                       (hrtimer_wakeup)

--
To unsubscribe from this list: send the line "unsubscribe netdev"
in the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
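[Editorial note: a hedged cross-check of the powertop figures above. Each eth1-fp-N queue fires roughly 40,850 interrupts per second; extrapolating that rate to all 16 queues is an assumption, since powertop truncates its list after the top wakeup causes.]

```python
# Approximate per-queue NIC interrupt rate, read off the powertop
# "Top causes for wakeups" table (eth1-fp-N rows cluster around 40,850/s).
per_queue_irq = 40_850
queues = 16  # assumed: all 16 queues behave like the ones powertop lists

total_nic_irq = per_queue_irq * queues
print(total_nic_irq)  # 653600
```

That total is close to the ~650,000 IPIs per second quoted earlier, i.e. roughly one RPS IPI per NIC interrupt, so the two measurements are consistent.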