Date:	Wed, 18 Aug 2010 15:46:59 +0100
From:	Chris Webb <chris@...chsys.com>
To:	Wu Fengguang <fengguang.wu@...el.com>
Cc:	Minchan Kim <minchan.kim@...il.com>,
	"linux-mm@...ck.org" <linux-mm@...ck.org>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com>,
	Pekka Enberg <penberg@...helsinki.fi>
Subject: Re: Over-eager swapping

Wu Fengguang <fengguang.wu@...el.com> writes:

> Did you enable any NUMA policy? That could start swapping even if
> there are lots of free pages in some nodes.

Hi. Thanks for the follow-up. We haven't done any configuration or tuning of
NUMA behaviour, but NUMA support is definitely compiled into the kernel:

  # zgrep NUMA /proc/config.gz 
  CONFIG_NUMA_IRQ_DESC=y
  CONFIG_NUMA=y
  CONFIG_K8_NUMA=y
  CONFIG_X86_64_ACPI_NUMA=y
  # CONFIG_NUMA_EMU is not set
  CONFIG_ACPI_NUMA=y
  # grep -i numa /var/log/dmesg.boot 
  NUMA: Allocated memnodemap from b000 - 1b540
  NUMA: Using 20 for the hash shift.
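
We haven't knowingly set a policy from userspace either, but I'll double-check
next time I'm on the box. Roughly what I plan to run (untested sketch; assumes
numactl is installed):

  # numactl --show                       # policy in effect for this shell
  # numactl --hardware                   # per-node size, free memory, distances
  # cat /proc/sys/vm/zone_reclaim_mode   # non-zero: reclaim locally before going off-node
  # grep -lE 'bind|interleave|prefer' /proc/[0-9]*/numa_maps 2>/dev/null

The last command should list any process whose mappings carry an explicit
policy rather than the default local allocation. My (possibly shaky)
understanding is that a non-zero zone_reclaim_mode, or a bind policy along the
lines of "numactl --membind=0 <cmd>", could push one node into reclaim and
swap while the other still has plenty free, which sounds like what you
describe above.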

> Are your free pages equally distributed over the nodes? Or limited to
> some of the nodes? Try this command:
> 
>         grep MemFree /sys/devices/system/node/node*/meminfo

My worst-case machines currently have swap turned off completely to keep them
usable for clients, but I have one machine which is about 3GB into swap with
8GB of buffers and 3GB free. This shows:

  # grep MemFree /sys/devices/system/node/node*/meminfo
  /sys/devices/system/node/node0/meminfo:Node 0 MemFree:          954500 kB
  /sys/devices/system/node/node1/meminfo:Node 1 MemFree:         2374528 kB

I could definitely imagine that one of the nodes dipped down to zero at some
point in the past. I'll try enabling swap late tonight on one of the machines
that shows the problem badly and repeat the experiment. The per-node meminfo
on that box currently looks like:

  # grep MemFree /sys/devices/system/node/node*/meminfo
  /sys/devices/system/node/node0/meminfo:Node 0 MemFree:           82732 kB
  /sys/devices/system/node/node1/meminfo:Node 1 MemFree:         1723896 kB
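
To catch the moment a node bottoms out overnight, I'll also log the per-node
free figures alongside the swap counters; something like this quick sketch
(untested, and the log path is arbitrary):

  # while sleep 60; do date; \
        grep MemFree /sys/devices/system/node/node*/meminfo; \
        grep -E '^pswp(in|out) ' /proc/vmstat; \
    done >> /root/node-memfree.log

That should make it obvious whether pswpout only climbs while one node's
MemFree is close to zero.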

Best wishes,

Chris.
