Message-ID: <54E375B2.8090101@redhat.com>
Date: Tue, 17 Feb 2015 18:09:06 +0100
From: Paolo Bonzini <pbonzini@...hat.com>
To: Igor Mammedov <imammedo@...hat.com>, "Michael S. Tsirkin" <mst@...hat.com>
CC: linux-kernel@...r.kernel.org, kvm@...r.kernel.org, netdev@...r.kernel.org
Subject: Re: [PATCH] vhost: support upto 509 memory regions

On 17/02/2015 16:02, Igor Mammedov wrote:
>> Not if there are about 6 regions, I think.
>
> When memslots were increased to 509 and their lookup was replaced with a
> binary search, results were on par with linear search for a default
> 13-memslot VM.
>
> Adding LRU

You mean MRU. :)

> cache helped to shave ~40% of cycles for sequential lookup workloads.

It's a bit different for vhost, because you can have up to four "things"
being looked up at the same time:

- the s/g list that will end up in the skb
- the avail/used ring
- the virtio buffers
- the virtio indirect buffers

So you probably need multiple MRU caches. But yes, MRU can help a lot.

Paolo
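[Editor's note: the per-stream MRU idea above can be sketched roughly as below. This is an illustrative standalone sketch, not the actual vhost code; the names (`region`, `mru_cache`, `mru_lookup`) and the flat sorted-array layout are assumptions for the example.]

```c
/* Sketch: guest-physical -> host-virtual region lookup with one MRU
 * hint per lookup stream, so the four concurrent "things" (s/g list,
 * avail/used ring, buffers, indirect buffers) don't evict each other's
 * cached region. Hypothetical types; not the real vhost structures. */
#include <stddef.h>
#include <stdint.h>

struct region {
    uint64_t gpa;   /* guest-physical start */
    uint64_t size;  /* length in bytes */
    uint64_t hva;   /* host-virtual start */
};

enum stream { ST_SG, ST_RING, ST_BUF, ST_INDIRECT, ST_MAX };

struct mru_cache {
    size_t hint[ST_MAX];  /* index of last region hit, per stream */
};

static const struct region *
mru_lookup(const struct region *regions, size_t n,
           struct mru_cache *cache, enum stream st, uint64_t gpa)
{
    size_t h = cache->hint[st];
    size_t i;

    /* Fast path: re-check this stream's most recently used region.
     * The unsigned subtraction also rejects gpa < regions[h].gpa. */
    if (h < n && gpa - regions[h].gpa < regions[h].size)
        return &regions[h];

    /* Slow path: linear scan, then remember the hit for next time. */
    for (i = 0; i < n; i++) {
        if (gpa - regions[i].gpa < regions[i].size) {
            cache->hint[st] = i;
            return &regions[i];
        }
    }
    return NULL;  /* address not covered by any region */
}
```

For sequential workloads (the common case for ring and buffer accesses), consecutive lookups in one stream tend to hit the same region, so the fast path avoids rescanning; separate hints keep, say, ring lookups from thrashing the buffer stream's hint.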