Date:	Mon, 10 Dec 2012 21:35:06 +0100
From:	Zlatko Calusic <zlatko.calusic@...on.hr>
To:	Linus Torvalds <torvalds@...ux-foundation.org>
CC:	Andrew Morton <akpm@...ux-foundation.org>,
	Mel Gorman <mgorman@...e.de>,
	Johannes Weiner <hannes@...xchg.org>,
	Rik van Riel <riel@...hat.com>,
	linux-mm <linux-mm@...ck.org>,
	Linux Kernel Mailing List <linux-kernel@...r.kernel.org>
Subject: Re: kswapd craziness in 3.7

On 10.12.2012 20:13, Linus Torvalds wrote:
> 
> It's worth giving this as much testing as is at all possible, but at
> the same time I really don't think I can delay 3.7 any more without
> messing up the holiday season too much. So unless something obvious
> pops up, I will do the release tonight. So testing will be minimal -
> but it's not like we haven't gone back-and-forth on this several times
> already, and we revert to *mostly* the same old state as 3.6 anyway,
> so it should be fairly safe.
> 

It compiles and boots without a hitch, so it must be perfect. :)

Seriously, a few more hours need to pass before I can provide more convincing data. That's how long it takes on this particular machine for memory pressure to build up and fragmentation to set in; only then will I be able to tell how it really behaves. I promise to report back as soon as I can.

Funny that you should mention i915, because just yesterday my daughter managed to lock up our laptop hard (a first), and this is what I found in kern.log after the restart:

Dec  9 21:29:42 titan vmunix: general protection fault: 0000 [#1] PREEMPT SMP 
Dec  9 21:29:42 titan vmunix: Modules linked in: vboxpci(O) vboxnetadp(O) vboxnetflt(O) vboxdrv(O) [last unloaded: microcode]
Dec  9 21:29:42 titan vmunix: CPU 2 
Dec  9 21:29:42 titan vmunix: Pid: 2523, comm: Xorg Tainted: G           O 3.7.0-rc8 #1 Hewlett-Packard HP Pavilion dv7 Notebook PC/144B
Dec  9 21:29:42 titan vmunix: RIP: 0010:[<ffffffff81090b9c>]  [<ffffffff81090b9c>] find_get_page+0x3c/0x90
Dec  9 21:29:42 titan vmunix: RSP: 0018:ffff88014d9f7928  EFLAGS: 00010246
Dec  9 21:29:42 titan vmunix: RAX: ffff880052594bc8 RBX: 0200000000000000 RCX: 00000000fffffffa
Dec  9 21:29:42 titan vmunix: RDX: 0000000000000001 RSI: ffff880052594bc8 RDI: 0000000000000000
Dec  9 21:29:42 titan vmunix: RBP: ffff88014d9f7948 R08: 0200000000000000 R09: ffff880052594b18
Dec  9 21:29:42 titan vmunix: R10: 57ffe4cbb74d1280 R11: 0000000000000000 R12: ffff88011c959a90
Dec  9 21:29:42 titan vmunix: R13: 0000000000000053 R14: 0000000000000000 R15: 0000000000000053
Dec  9 21:29:42 titan vmunix: FS:  00007fcd8d413880(0000) GS:ffff880157c80000(0000) knlGS:0000000000000000
Dec  9 21:29:42 titan vmunix: CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
Dec  9 21:29:42 titan vmunix: CR2: ffffffffff600400 CR3: 000000014d937000 CR4: 00000000000007e0
Dec  9 21:29:42 titan vmunix: DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
Dec  9 21:29:42 titan vmunix: DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
Dec  9 21:29:42 titan vmunix: Process Xorg (pid: 2523, threadinfo ffff88014d9f6000, task ffff88014d9c1260)
Dec  9 21:29:42 titan vmunix: Stack:
Dec  9 21:29:42 titan vmunix:  ffff88014d9f7958 ffff88011c959a88 0000000000000053 ffff88011c959a88
Dec  9 21:29:42 titan vmunix:  ffff88014d9f7978 ffffffff81090e21 0000000000000001 ffffea00014d1280
Dec  9 21:29:42 titan vmunix:  ffff88011c959960 0000000000000001 ffff88014d9f7a28 ffffffff810a1b60
Dec  9 21:29:42 titan vmunix: Call Trace:
Dec  9 21:29:42 titan vmunix:  [<ffffffff81090e21>] find_lock_page+0x21/0x80
Dec  9 21:29:42 titan vmunix:  [<ffffffff810a1b60>] shmem_getpage_gfp+0xa0/0x620
Dec  9 21:29:42 titan vmunix:  [<ffffffff810a224c>] shmem_read_mapping_page_gfp+0x2c/0x50
Dec  9 21:29:42 titan vmunix:  [<ffffffff812b3611>] i915_gem_object_get_pages_gtt+0xe1/0x270
Dec  9 21:29:42 titan vmunix:  [<ffffffff812b127f>] i915_gem_object_get_pages+0x4f/0x90
Dec  9 21:29:42 titan vmunix:  [<ffffffff812b1383>] i915_gem_object_bind_to_gtt+0xc3/0x4c0
Dec  9 21:29:42 titan vmunix:  [<ffffffff812b4413>] i915_gem_object_pin+0x123/0x190
Dec  9 21:29:42 titan vmunix:  [<ffffffff812b7d97>] i915_gem_execbuffer_reserve_object.isra.13+0x77/0x190
Dec  9 21:29:42 titan vmunix:  [<ffffffff812b8171>] i915_gem_execbuffer_reserve.isra.14+0x2c1/0x320
Dec  9 21:29:42 titan vmunix:  [<ffffffff812b87b2>] i915_gem_do_execbuffer.isra.17+0x5e2/0x11b0
Dec  9 21:29:42 titan vmunix:  [<ffffffff812b9894>] i915_gem_execbuffer2+0x94/0x280
Dec  9 21:29:42 titan vmunix:  [<ffffffff81287de3>] drm_ioctl+0x493/0x530
Dec  9 21:29:42 titan vmunix:  [<ffffffff812b9800>] ? i915_gem_execbuffer+0x480/0x480
Dec  9 21:29:42 titan vmunix:  [<ffffffff810d9cbf>] do_vfs_ioctl+0x8f/0x530
Dec  9 21:29:42 titan vmunix:  [<ffffffff810da1ab>] sys_ioctl+0x4b/0x90
Dec  9 21:29:42 titan vmunix:  [<ffffffff810c9e2d>] ? sys_read+0x4d/0xa0
Dec  9 21:29:42 titan vmunix:  [<ffffffff8154a4d2>] system_call_fastpath+0x16/0x1b
Dec  9 21:29:42 titan vmunix: Code: 63 08 48 83 ec 08 e8 84 9c fb ff 4c 89 ee 4c 89 e7 e8 89 b7 15 00 48 85 c0 48 89 c6 74 41 48 8b 18 48 85 db 74 1f f6 c3 03 75 3c <8b> 53 1c 85 d2 74 d9 8d 7a 01 89 d0 f0 0f b1 7b 1c 39 c2 75 23 
Dec  9 21:29:42 titan vmunix: RIP  [<ffffffff81090b9c>] find_get_page+0x3c/0x90
Dec  9 21:29:42 titan vmunix:  RSP <ffff88014d9f7928>
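
If I'm reading the Code: line right, the faulting instruction (8b 53 1c, a mov 0x1c(%rbx),%edx, immediately followed by a lock cmpxchg on the same offset) is the atomic_inc_not_zero() on page->_count inside page_cache_get_speculative(), with RBX holding a garbage page pointer (0200000000000000, non-canonical, hence the general protection fault rather than a page fault). In other words, find_get_page() appears to have pulled a stale entry out of the mapping's radix tree. For reference, the lookup pattern there goes roughly like this (a from-memory sketch of that era's mm/filemap.c, not verbatim 3.7 source):

	rcu_read_lock();
repeat:
	page = NULL;
	pagep = radix_tree_lookup_slot(&mapping->page_tree, offset);
	if (pagep) {
		page = radix_tree_deref_slot(pagep);
		if (unlikely(!page))
			goto out;
		if (radix_tree_exception(page)) {	/* the test $0x3,%bl above */
			if (radix_tree_deref_retry(page))
				goto repeat;
			goto out;			/* shmem/tmpfs swap entry */
		}
		/* GP fault here: reads page->_count (offset 0x1c) through RBX */
		if (!page_cache_get_speculative(page))
			goto repeat;
		if (unlikely(page != *pagep)) {		/* raced with removal */
			page_cache_release(page);
			goto repeat;
		}
	}
out:
	rcu_read_unlock();
	return page;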

It seems that whenever (if ever?) removing GFP_NO_KSWAPD is attempted again, the i915 driver will need more careful handling; its shmem allocation path is sketched below.
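
For context: i915_gem_object_get_pages_gtt() fills the object's backing store from shmem with a deliberately relaxed GFP mask, so the first attempt can fail fast under pressure before the driver falls back. Roughly (again a from-memory sketch of the 3.7-era driver, not verbatim source, with the purge/fallback details elided):

	struct address_space *mapping = obj->base.filp->f_mapping;
	gfp_t gfp = mapping_gfp_mask(mapping);

	/* first try: fail fast, stay quiet, and don't wake kswapd */
	gfp |= __GFP_NORETRY | __GFP_NOWARN | __GFP_NO_KSWAPD;
	gfp &= ~(__GFP_IO | __GFP_WAIT);

	page = shmem_read_mapping_page_gfp(mapping, i, gfp);
	if (IS_ERR(page)) {
		/* retry with the mapping's full mask (the driver also
		 * purges its own caches at this point; elided here) */
		gfp = mapping_gfp_mask(mapping);
		page = shmem_read_mapping_page_gfp(mapping, i, gfp);
	}

So any future removal of __GFP_NO_KSWAPD would have to account for this path and its assumptions about reclaim behaviour.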
-- 
Zlatko
