Date:	Mon, 7 Feb 2011 15:40:41 +0530
From:	Vaidyanathan Srinivasan <svaidy@...ux.vnet.ibm.com>
To:	Daniel Tiron <dtiron@...ian.armed.us>
Cc:	LKML <linux-kernel@...r.kernel.org>
Subject: Re: Does the scheduler know about the cache topology?

* Daniel Tiron <dtiron@...ian.armed.us> [2011-02-07 10:51:42]:

> Hi all.
> 
> I did some performance tests on a Core 2 Quad machine [1] with QEMU.
> A QEMU instance creates one main thread and one thread for each virtual
> CPU. There were two VMs with one CPU each, which makes four threads.
> 
> I tried different combinations where I pinned one thread to one physical
> core with taskset and measured the network performance between the VMs
> with iperf [2]. The best result was achieved with each VM (main and CPU
> thread) assigned to one cache group (cores 0 & 1 and 2 & 3).
> 
> But it also turns out that letting the scheduler handle the assignment
> works well, too: The results where no pinning was done were just
> slightly below the best. So I was wondering, is the Linux scheduler
> aware of the CPU's cache topology?

Yes, the sched domains are created based on the socket or L2 cache
boundaries.  The scheduler will try to keep a task on the same CPU,
or move it somewhere close by if it does have to migrate the task.

The CPU topology and cache domains in an SMP system are captured in
the form of a sched domain tree within the scheduler, and this
structure is consulted during task scheduling and migration.
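
For reference, the cache sharing that these domains are built from
is visible from userspace via sysfs.  A minimal sketch in C; it
assumes four CPUs and that cache index2 is the L2, which holds for
a Core 2 Quad like yours but may differ on other machines:

/* Print which CPUs share an L2 cache with each CPU.
 * Assumption: sysfs cacheinfo layout with index2 == L2 on this box.
 */
#include <stdio.h>

int main(void)
{
	char path[128], buf[64];
	FILE *f;
	int cpu;

	for (cpu = 0; cpu < 4; cpu++) {	/* Core 2 Quad: CPUs 0-3 */
		snprintf(path, sizeof(path),
			 "/sys/devices/system/cpu/cpu%d/cache/index2/shared_cpu_list",
			 cpu);
		f = fopen(path, "r");
		if (!f || !fgets(buf, sizeof(buf), f)) {
			printf("cpu%d: no L2 info\n", cpu);
			if (f)
				fclose(f);
			continue;
		}
		printf("cpu%d shares L2 with CPUs %s", cpu, buf);
		fclose(f);
	}
	return 0;
}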

When running VMs there is an interesting side effect: the host
scheduler knows the cache domains, but the guest scheduler does not.
If the guest scheduler keeps moving tasks between the vCPUs, then
the cache affinity and its benefits could be lost.
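
FWIW, the taskset pinning from your experiment can also be done
programmatically with sched_setaffinity(2).  A minimal sketch,
equivalent to taskset -pc <cpu> <pid>; the pid and cpu values are
just whatever you pass on the command line:

/* Pin a task (e.g. a QEMU vcpu thread) to one CPU.
 * Usage: ./pin <pid> <cpu>  -- both arguments are examples.
 */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>

int main(int argc, char **argv)
{
	cpu_set_t set;
	pid_t pid;
	int cpu;

	if (argc != 3) {
		fprintf(stderr, "usage: %s <pid> <cpu>\n", argv[0]);
		return 1;
	}
	pid = atoi(argv[1]);
	cpu = atoi(argv[2]);

	CPU_ZERO(&set);
	CPU_SET(cpu, &set);
	if (sched_setaffinity(pid, sizeof(set), &set)) {
		perror("sched_setaffinity");
		return 1;
	}
	printf("pinned pid %d to cpu %d\n", pid, cpu);
	return 0;
}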

> I'm curious to hear your opinion.
> 
> Thanks,
> Daniel
> 
> [1] Cores 0 and 1 share one L2 cache and so do 2 and 3
> [2] The topic of my research is networking performance. My interest in
>     cache awareness is only a side effect.

Interrupt delivery and routing may also affect network performance.
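
For example, a NIC's interrupt can be steered to a particular CPU by
writing a CPU mask to /proc/irq/<N>/smp_affinity.  A minimal sketch;
the IRQ number below is made up, check /proc/interrupts for the real
one, and this needs root:

/* Steer a (hypothetical) NIC interrupt to CPU 0 by writing a CPU
 * bitmask to its smp_affinity file.  IRQ 19 is just an example.
 */
#include <stdio.h>

int main(void)
{
	const char *path = "/proc/irq/19/smp_affinity";
	FILE *f = fopen(path, "w");

	if (!f) {
		perror("fopen");
		return 1;
	}
	fprintf(f, "1\n");	/* bitmask: CPU 0 only */
	if (fclose(f)) {	/* proc writes may only fail at close */
		perror("fclose");
		return 1;
	}
	return 0;
}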

--Vaidy

