Message-Id: <1234401390.7699.29.camel@minggr.sh.intel.com>
Date: Thu, 12 Feb 2009 09:16:30 +0800
From: Lin Ming <ming.m.lin@...el.com>
To: torvalds@...ux-foundation.org
Cc: linux-kernel <linux-kernel@...r.kernel.org>,
"Zhang, Yanmin" <yanmin_zhang@...ux.intel.com>
Subject: specjbb2005 fails with 2.6.29-rc4
specjbb2005 fails with 2.6.29-rc4 on many test machines.

On a 4*4-core Tigerton machine, we run specjbb2005 with
starting_number_warehouses = 1 and ending_number_warehouses = 34.
It fails when warehouses = 16.

The test fails with the JRockit VM; it works well when OpenJDK is
used instead.

It is bisected to the commit below.
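For reference, the warehouse range above is controlled from the SPECjbb2005
properties file. A minimal fragment matching the run described (property
names taken from the standard SPECjbb.props shipped with the kit; verify
against your copy):

```properties
# SPECjbb.props fragment - warehouse ramp matching the failing run
input.starting_number_warehouses=1
input.ending_number_warehouses=34
input.increment_number_warehouses=1
```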
commit fc8744adc870a8d4366908221508bb113d8b72ee
Author: Linus Torvalds <torvalds@...ux-foundation.org>
Date: Sat Jan 31 15:08:56 2009 -0800
Stop playing silly games with the VM_ACCOUNT flag
The mmap_region() code would temporarily set the VM_ACCOUNT flag for
anonymous shared mappings just to inform shmem_zero_setup() that it
should enable accounting for the resulting shm object. It would then
clear the flag after calling ->mmap (for the /dev/zero case) or doing
shmem_zero_setup() (for the MAP_ANON case).
This just resulted in vma merge issues, but also made for just
unnecessary confusion. Use the already-existing VM_NORESERVE flag for
this instead, and let shmem_{zero|file}_setup() just figure it out from
that.
This also happens to make it obvious that the new DRI2 GEM layer uses a
non-reserving backing store for its object allocation - which is quite
possibly not intentional. But since I didn't want to change semantics
in this patch, I left it alone, and just updated the caller to use the
new flag semantics.
Signed-off-by: Linus Torvalds <torvalds@...ux-foundation.org>
---
 drivers/gpu/drm/drm_gem.c |    2 +-
 ipc/shm.c                 |    4 +-
 mm/mmap.c                 |   48 +++++++++++++++++++++++---------------------
 mm/shmem.c                |    2 +-
 4 files changed, 29 insertions(+), 27 deletions(-)
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/