Open Source and information security mailing list archives
Message-ID: <20100502205044.450beda2@infradead.org>
Date: Sun, 2 May 2010 20:50:44 -0700
From: Arjan van de Ven <arjan@...radead.org>
To: Eric Dumazet <eric.dumazet@...il.com>
Cc: Andi Kleen <andi@...stfloor.org>, David Miller <davem@...emloft.net>,
	hadi@...erus.ca, xiaosuo@...il.com, therbert@...gle.com,
	shemminger@...tta.com, netdev@...r.kernel.org, lenb@...nel.org
Subject: Re: [PATCH v6] net: batch skb dequeueing from softnet input_pkt_queue

> > Also, I'm starting to wonder if Andi's patch to use io_schedule()
> > needs to be replaced with a net_schedule() kind of thing. The
> > cpuidle code currently has a weight factor for IO (based on
> > measuring/experiments), and maybe networking really needs another
> > factor... so just having a parallel concept with a different weight
> > could be the right answer for that.
>
> But a task blocked on disk IO is probably blocked for a small amount
> of time, while on network, it can be for a long time. I am not sure
> it's the right metric.

it's not so much about the duration as it is about the performance
sensitivity....

> I was expecting something based on recent history.
> Say if we have 20,000 wakeups per second, most likely we should not
> enter C2/C3 states...

we effectively do that. The thing is that C2 is normally so low cost
that it's still worth it even at 20k wakeups... this is where the BIOS
tells us how "heavy" the states are.... and 64 usec is just not very
much.

--
Arjan van de Ven 	Intel Open Source Technology Centre
For development, discussion and tips for power savings,
visit http://www.lesswatts.org