Date:	Sat, 11 Jun 2016 11:39:48 -0600
From:	Logan Gunthorpe <logang@...tatee.com>
To:	"Rafael J. Wysocki" <rjw@...ysocki.net>
Cc:	Kees Cook <keescook@...omium.org>,
	"Rafael J. Wysocki" <rafael@...nel.org>,
	Stephen Smalley <sds@...ho.nsa.gov>,
	Ingo Molnar <mingo@...nel.org>, Ingo Molnar <mingo@...hat.com>,
	the arch/x86 maintainers <x86@...nel.org>,
	"linux-pm@...r.kernel.org" <linux-pm@...r.kernel.org>,
	Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
	Andy Lutomirski <luto@...nel.org>,
	Borislav Petkov <bp@...en8.de>,
	Denys Vlasenko <dvlasenk@...hat.com>,
	Brian Gerst <brgerst@...il.com>
Subject: Re: PROBLEM: Resume from hibernate broken by setting NX on gap

Hey Rafael,

I tried this patch as well and there was no change.

I have a couple of tentative observations, though. None of this is 
100% clear to me, so please correct me if I'm wrong anywhere:

1) Commit ab76f7b4ab only extends the NX marking to the gap between 
__ex_table and rodata, which, as I understand it, shouldn't be used by 
anything. And __ex_table and rodata are fixed by the kernel binary, so 
both symbols should be the same in the image kernel and the boot kernel, 
given that both are running from the same binary.

2) When ab76f7b4ab is reverted, hibernation seems to work 100% of the 
time; with it in place, it only works some of the time. Given that the 
commit only extends the NX region a bit, if there is some random 
mismatch, why does the failure never reach rodata? In other words, why 
does rodata seem to be a magic line that always works -- why doesn't 
this random mismatch ever extend into the rodata region? rodata isn't 
_that_ far away from the end of __ex_table.

Anyway, thanks again for looking into this.

Logan


On 10/06/16 07:47 PM, Rafael J. Wysocki wrote:
> On Saturday, June 11, 2016 02:13:45 AM Rafael J. Wysocki wrote:
>> On Saturday, June 11, 2016 12:33:31 AM Rafael J. Wysocki wrote:
>>> On Friday, June 10, 2016 04:28:01 PM Logan Gunthorpe wrote:
>>>>
>>>> On 10/06/16 04:29 PM, Rafael J. Wysocki wrote:
>>>>> OK, I have a theory, but I need a bit of help.
>>>>>
>>>>> This may be a dumb question, but I don't quite remember the answer readily.
>>>>>
>>>>> Given a physical address, how do I get the corresponding virtual one under
>>>>> the kernel identity mapping?
>>>>
>>>> Is that not just 'phys_to_virt(x)'??
>>>
>>> Ah that.  Thanks!
>>
>> So first, appended is a cleanup of the 64-bit hibernate/restore code that hopefully
>> will make it a bit clearer what happens there.
>>
>> In particular, it follows from it quite clearly that the restore will only work
>> if the virtual address of the trampoline (&restore_registers) in the image
>> kernel happens to match the virtual address of the same chunk of memory in the
>> boot kernel (and after it has switched to the temporary page tables).
>>
>> That appears to be the case most of the time, or hibernation would randomly
>> break for people all over, but I'm not actually sure if that's guaranteed to
>> happen.  If not, that very well may explain unexpected failures in there.
>>
>> If it is not guaranteed to happen, the solution would be to pass the physical
>> address of that memory from the image kernel to the boot kernel instead of its
>> virtual address,
>
> OK, the appended patch does the trick for me.
>
> Logan, can you please try it?  It shouldn't break things, but I'm wondering if
> the problem you're having after commit ab76f7b4ab is still reproducible with it.
>
> Thanks,
> Rafael
>
> ---
> From: Rafael J. Wysocki <rafael.j.wysocki@...el.com>
> Subject: [PATCH] x86 / hibernate: Pass physical address of the trampoline
>
> During resume from hibernation, the boot kernel has to jump to
> the image kernel's trampoline code to pass control to it.  On
> x86_64 the address of that code is passed from the image kernel
> to the boot kernel in the image header.
>
> Currently, the way in which that address is saved by the image kernel
> is not entirely straightforward (assembly code is involved in that
> unnecessarily etc).  Moreover, the current code passes the virtual
> address of the trampoline with the assumption that the image and boot
> kernels will always use the same virtual addresses for the chunk of
> physical address space occupied by the trampoline code.
>
> In case that assumption is not valid, pass the physical address of
> the trampoline code from the image kernel to the boot kernel and
> update RESTORE_MAGIC to avoid situations in which the boot kernel
> may use a different trampoline address passing convention from the
> one used by the image kernel.
>
> Also drop the assembly code chunk updating restore_jump_address
> in swsusp_arch_suspend() as it is not necessary any more.
>
> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@...el.com>
> ---
>   arch/x86/power/hibernate_64.c     |    8 ++++----
>   arch/x86/power/hibernate_asm_64.S |    3 ---
>   2 files changed, 4 insertions(+), 7 deletions(-)
>
> Index: linux-pm/arch/x86/power/hibernate_64.c
> ===================================================================
> --- linux-pm.orig/arch/x86/power/hibernate_64.c
> +++ linux-pm/arch/x86/power/hibernate_64.c
> @@ -27,7 +27,7 @@ extern asmlinkage __visible int restore_
>    * Address to jump to in the last phase of restore in order to get to the image
>    * kernel's text (this value is passed in the image header).
>    */
> -unsigned long restore_jump_address __visible;
> +void *restore_jump_address __visible;
>
>   /*
>    * Value of the cr3 register from before the hibernation (this value is passed
> @@ -113,7 +113,7 @@ struct restore_data_record {
>   	unsigned long magic;
>   };
>
> -#define RESTORE_MAGIC	0x0123456789ABCDEFUL
> +#define RESTORE_MAGIC	0x0123456789ABCDF0UL
>
>   /**
>    *	arch_hibernation_header_save - populate the architecture specific part
> @@ -126,7 +126,7 @@ int arch_hibernation_header_save(void *a
>
>   	if (max_size < sizeof(struct restore_data_record))
>   		return -EOVERFLOW;
> -	rdr->jump_address = restore_jump_address;
> +	rdr->jump_address = (unsigned long)__pa_symbol(&restore_registers);
>   	rdr->cr3 = restore_cr3;
>   	rdr->magic = RESTORE_MAGIC;
>   	return 0;
> @@ -141,7 +141,7 @@ int arch_hibernation_header_restore(void
>   {
>   	struct restore_data_record *rdr = addr;
>
> -	restore_jump_address = rdr->jump_address;
> +	restore_jump_address = (void *)(rdr->jump_address + __START_KERNEL_map);
>   	restore_cr3 = rdr->cr3;
>   	return (rdr->magic == RESTORE_MAGIC) ? 0 : -EINVAL;
>   }
> Index: linux-pm/arch/x86/power/hibernate_asm_64.S
> ===================================================================
> --- linux-pm.orig/arch/x86/power/hibernate_asm_64.S
> +++ linux-pm/arch/x86/power/hibernate_asm_64.S
> @@ -44,9 +44,6 @@ ENTRY(swsusp_arch_suspend)
>   	pushfq
>   	popq	pt_regs_flags(%rax)
>
> -	/* save the address of restore_registers */
> -	movq	$restore_registers, %rax
> -	movq	%rax, restore_jump_address(%rip)
>   	/* save cr3 */
>   	movq	%cr3, %rax
>   	movq	%rax, restore_cr3(%rip)
>
