Date:	Mon, 11 May 2009 15:22:44 +0200
From:	Andi Kleen <andi@...stfloor.org>
To:	Stefan Lankes <lankes@...s.rwth-aachen.de>
Cc:	linux-kernel@...r.kernel.org, Lee.Schermerhorn@...com,
	linux-numa@...r.kernel.org
Subject: Re: [RFC PATCH 0/4]: affinity-on-next-touch

Stefan Lankes <lankes@...s.rwth-aachen.de> writes:
>
> [Patch 1/4]: Extend the system call madvise with a new parameter
> MADV_ACCESS_LWP (the same as used in Solaris). The specified memory area
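
For reference, the call the patch proposes would presumably look
roughly like this from user space (a sketch only; MADV_ACCESS_LWP
comes from the RFC patch and is not in mainline headers, so the value
below is a placeholder):

#include <sys/mman.h>
#include <stdio.h>

/* Placeholder: the real value is whatever the RFC patch assigns;
 * it is not defined in mainline headers. */
#ifndef MADV_ACCESS_LWP
#define MADV_ACCESS_LWP 100
#endif

int main(void)
{
	size_t len = 1 << 20;
	void *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	/* Ask that pages in [buf, buf+len) be migrated to the node of
	 * the next thread that touches them. On a kernel without the
	 * patch this fails with EINVAL. */
	if (madvise(buf, len, MADV_ACCESS_LWP) != 0)
		perror("madvise(MADV_ACCESS_LWP)");
	return 0;
}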

Linux does NUMA memory policies in mbind(), not madvise().
Also, if there is a new NUMA policy, it should fit into the standard
Linux NUMA memory policy framework rather than invent a new one.
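
For comparison, a minimal sketch of the existing framework (assuming
libnuma's <numaif.h> wrapper and a machine that has a node 0; link
with -lnuma):

#include <numaif.h>	/* mbind(), MPOL_* */
#include <sys/mman.h>
#include <stdio.h>

int main(void)
{
	size_t len = 1 << 20;
	void *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	/* Bind the range to node 0; MPOL_MF_MOVE also migrates any
	 * pages that were already faulted in elsewhere. */
	unsigned long nodemask = 1UL << 0;
	if (mbind(buf, len, MPOL_BIND, &nodemask,
		  sizeof(nodemask) * 8, MPOL_MF_MOVE) != 0)
		perror("mbind");
	return 0;
}

A new policy would slot in as another MPOL_* mode here instead of a
separate madvise() path.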

[I find it amazing that you apparently did so much work
without being familiar with the existing Linux NUMA policies]

Your patches seem to have a lot of overlap with
Lee Schermerhorn's old migrate-memory-on-cpu-migration patches.
I don't know the current status of those.

> [Patch 4/4]: This part of the patch adds some counters to detect migration
> errors and publishes these counters via /proc/vmstat. Besides this, the
> Kconfig file is extended with the parameter CONFIG_AFFINITY_ON_NEXT_TOUCH.
>
> With this patch, the kernel reduces the overhead of page distribution via
> "affinity-on-next-touch" from 2518ms to 366ms compared to the user-level

The interesting part is not so much how much faster it is than a user
space implementation, but how much this migrate-on-touch approach
helps in general compared to the already existing policies. Some hard
numbers on that would be appreciated.
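
One way to collect such numbers would be to snapshot the migration
counters around each run (a sketch; the exact counter names patch 4/4
publishes in /proc/vmstat are not shown in the quoted text, so the
"migrate" substring matched below is an assumption):

#include <stdio.h>
#include <string.h>

/* Print every /proc/vmstat line that mentions migration; run this
 * before and after the benchmark and diff the values. */
int main(void)
{
	FILE *f = fopen("/proc/vmstat", "r");
	char line[256];

	if (!f) {
		perror("/proc/vmstat");
		return 1;
	}
	while (fgets(line, sizeof(line), f))
		if (strstr(line, "migrate"))
			fputs(line, stdout);
	fclose(f);
	return 0;
}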

Note that for the OpenMP case old kernels sometimes had trouble
because the threads tended not to be scheduled to their final target
CPU in the first time slice, so the memory was often first-touched
on the wrong node. Later kernels avoided that by moving the threads
more aggressively early on.
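
For concreteness, the classic pattern that runs into this (a
hypothetical minimal example, not taken from the patch set; compile
with -fopenmp):

#include <stdlib.h>

#define N (1 << 24)

int main(void)
{
	double *a = malloc(N * sizeof *a);
	long i;

	if (!a)
		return 1;

	/* First touch: pages land on the node of whichever CPU each
	 * thread happens to run on right now. If the scheduler has not
	 * yet moved the threads to their final CPUs, pages end up on
	 * the wrong node and stay there. */
	#pragma omp parallel for schedule(static)
	for (i = 0; i < N; i++)
		a[i] = 0.0;

	/* Compute phase: the threads are on their final CPUs now, so
	 * any wrongly placed pages cost remote-node latency on every
	 * access from here on. */
	#pragma omp parallel for schedule(static)
	for (i = 0; i < N; i++)
		a[i] *= 2.0;

	free(a);
	return 0;
}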

This almost sounds like a workaround for that (I hope it's more
than that).

If you present any benchmarks, make sure the kernel you're
benchmarking against does not have this issue.

-Andi
-- 
ak@...ux.intel.com -- Speaking for myself only.
