Message-ID: <AANLkTimC4z7mCKwyK8=LaacjTtZGEGN_BYsFMydQiwBk@mail.gmail.com>
Date: Sat, 4 Sep 2010 09:38:30 -0400
From: Chetan Loke <chetanloke@...il.com>
To: Eric Dumazet <eric.dumazet@...il.com>
Cc: Bhavesh Davda <bhavesh@...are.com>, "netdev@...r.kernel.org" <netdev@...r.kernel.org>, "pv-drivers@...are.com" <pv-drivers@...are.com>, "therbert@...gle.com" <therbert@...gle.com>
Subject: Re: [Pv-drivers] rps and pvdrivers

Hi Eric,

On Sat, Sep 4, 2010 at 2:51 AM, Eric Dumazet <eric.dumazet@...il.com> wrote:
> Reproduce what exactly ?
>
> I dont understand what the problem is, reading your description.

With 2.6.35, the rx-path performance on a 10G vNIC is far too low. If you set up a simple 'recvfrom' loop on a promiscuous interface you can see this easily. My use case is high-speed packet capture.

> RPS is not automatically switched on, you have to configure it.
>
> echo ffff >/sys/class/net/eth0/queues/rx-0/rps_cpus
>
> Same for RFS if you prefer to use RFS
>
> echo 16384 >/sys/class/net/eth0/queues/rx-0/rps_flow_cnt
>

Ok, thanks for sharing this. I tried this and it still doesn't help.

> If you receive a flood, your cpu stay in NAPI mode, and no hardware
> interrupt is received while you process xxx.xxx packets per second.
>

I see. Ok, then that's what's happening. I guess I will have to look at the napi/ksoftirqd/RPS path in detail to understand this. But I would think that even without the RPS settings I should still get the same numbers as in the non-RPS case, correct?

On a VM (virtual machine) using a 1G vNIC I can capture ~250K pkts/sec (even higher in some cases). But I can't go beyond 100K pkts/sec on a 10G vNIC, because ksoftirqd consumes one CPU 100% of the time. That's why I thought of switching to the 2.6.35 kernel, to see if I could scale on 10G.

It's possible that a VM cannot handle that much load, so I tried sending only 10% of line rate (10G), which is 1G. It still doesn't work; I still can't capture that many packets.
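[For reference, a minimal sketch of the kind of recvfrom loop described above, written here in Python against a Linux AF_PACKET socket. This is not code from the thread: the interface name "eth0" and the one-second reporting interval are illustrative, it requires root, and putting the NIC into promiscuous mode (e.g. with `ip link set eth0 promisc on`) is assumed to have been done separately.]

```python
# Hypothetical sketch of a per-second packet-rate measurement loop,
# in the spirit of the 'recvfrom' test mentioned in the mail.
import socket
import time

ETH_P_ALL = 0x0003  # capture frames of every protocol


def rate(pkts, seconds):
    """Packets per second over an interval, guarding against a zero interval."""
    return pkts / seconds if seconds > 0 else 0.0


def capture(ifname="eth0", interval=1.0):
    # AF_PACKET + SOCK_RAW delivers whole link-layer frames (Linux only);
    # the protocol must be passed in network byte order.
    s = socket.socket(socket.AF_PACKET, socket.SOCK_RAW, socket.htons(ETH_P_ALL))
    s.bind((ifname, 0))
    buf = bytearray(2048)
    count = 0
    start = time.monotonic()
    while True:
        s.recv_into(buf)  # one packet per syscall, as in the mail's test
        count += 1
        now = time.monotonic()
        if now - start >= interval:
            print(f"{rate(count, now - start):.0f} pkts/sec")
            count, start = 0, now


if __name__ == "__main__":
    capture()
```

A loop like this is where the ~100K pkts/sec ceiling described above would show up: the printed rate plateaus while ksoftirqd pins one CPU.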
Chetan Loke