Date:	Wed, 18 Mar 2015 22:12:27 +0100
From:	Stefan Seyfried <stefan.seyfried@...glemail.com>
To:	Andy Lutomirski <luto@...capital.net>
CC:	Linus Torvalds <torvalds@...ux-foundation.org>,
	Takashi Iwai <tiwai@...e.de>,
	Denys Vlasenko <dvlasenk@...hat.com>, X86 ML <x86@...nel.org>,
	LKML <linux-kernel@...r.kernel.org>, Tejun Heo <tj@...nel.org>
Subject: Re: PANIC: double fault, error_code: 0x0 in 4.0.0-rc3-2, kvm related?

Am 18.03.2015 um 21:51 schrieb Andy Lutomirski:
> On Wed, Mar 18, 2015 at 1:05 PM, Stefan Seyfried
> <stefan.seyfried@...glemail.com> wrote:

>>> The relevant thread's stack is here (see ti in the trace):
>>>
>>> ffff8801013d4000
>>>
>>> It could be interesting to see what's there.
>>>
>>> I don't suppose you want to try to walk the paging structures to see
>>> if ffff88023bc80000 (i.e. gsbase) and, more specifically,
>>> ffff88023bc80000 + old_rsp and ffff88023bc80000 + kernel_stack are
>>> present?  You'd only have to walk one level -- presumably, if the PGD
>>> entry is there, the rest of the entries are okay, too.
>>
>> That's all greek to me :-)
>>
>> I see that there is something at ffff88023bc80000:
>>
>> crash> x /64xg 0xffff88023bc80000
>> 0xffff88023bc80000:     0x0000000000000000      0x0000000000000000
>> 0xffff88023bc80010:     0x0000000000000000      0x0000000000000000
>> 0xffff88023bc80020:     0x0000000000000000      0x000000006686ada9
>> 0xffff88023bc80030:     0x0000000000000000      0x0000000000000000
>> 0xffff88023bc80040:     0x0000000000000000      0x0000000000000000
>> [all zeroes]
>> 0xffff88023bc801f0:     0x0000000000000000      0x0000000000000000
>>
>> old_rsp and kernel_stack seem bogus:
>> crash> print old_rsp
>> Cannot access memory at address 0xa200
>> gdb: gdb request failed: print old_rsp
>> crash> print kernel_stack
>> Cannot access memory at address 0xaa48
>> gdb: gdb request failed: print kernel_stack
>>
>> kernel_stack is not a pointer? So 0xffff88023bc80000 + 0xaa48 it is:
> 
> Yup.  old_rsp and kernel_stack are offsets relative to gsbase.
> 
>>
>> crash> x /64xg 0xffff88023bc8aa00
>> 0xffff88023bc8aa00:     0x0000000000000000      0x0000000000000000
> 
> [...]
> 
> I don't know enough about crashkernel to know whether the fact that
> this worked means anything.

AFAIK this just means that the memory at this location is included in
the dump :-)

> Can you dump the page of physical memory at 0x4779a067?  That's the PGD.

Unfortunately not: this is a partial dump (I think it is the default
configuration in openSUSE, but I might have changed it at some point) and
the dump_level is 31, which means that the following page types are
excluded:

                     |      |cache  |cache  |      |
                dump | zero |without|with   | user | free
               level | page |private|private| data | page
              -------+------+-------+-------+------+------
                  31 |  X   |   X   |   X   |  X   |  X

so this:
crash> x /64xg 0x4779a067
0x4779a067:     Cannot access memory at address 0x4779a067
gdb: gdb request failed: x /64xg

probably just means that the PGD page falls into one of the excluded
categories above.

Best regards,

	Stefan
-- 
Stefan Seyfried
Linux Consultant & Developer -- GPG Key: 0x731B665B

B1 Systems GmbH
Osterfeldstraße 7 / 85088 Vohburg / http://www.b1-systems.de
GF: Ralph Dehner / Unternehmenssitz: Vohburg / AG: Ingolstadt,HRB 3537
