Message-ID: <10e8ba5d-1642-48cd-8abb-8abcd280a6cd@email.android.com>
Date: Thu, 27 Feb 2014 18:17:59 -0800
From: "H. Peter Anvin" <hpa@...or.com>
To: Vince Weaver <vincent.weaver@...ne.edu>
CC: Steven Rostedt <rostedt@...dmis.org>,
Peter Zijlstra <peterz@...radead.org>,
Linux Kernel <linux-kernel@...r.kernel.org>,
Ingo Molnar <mingo@...hat.com>
Subject: Re: perf_fuzzer compiled for x32 causes reboot
Ok... I think we're definitely talking about a cr2 leak. The reboot might be a race condition in the NMI nesting handling?
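
As a rough illustration of that kind of leak (a user-space model only; none of the names below are real kernel symbols): the hardware latches the faulting address in CR2, and if the NMI path itself takes a page fault before the interrupted page-fault handler has read CR2, the handler reads the NMI's fault address instead of the original one.

/* Minimal user-space model of the suspected CR2 leak; illustrative only. */
#include <stdio.h>

static unsigned long cr2;               /* stands in for the CR2 register */

static void take_fault(unsigned long address)
{
	cr2 = address;                  /* hardware latches the fault address */
}

static void nmi_handler(void)
{
	/* the perf output path touches a not-yet-present page... */
	take_fault(0xdeadbeef);         /* ...clobbering CR2 */
}

int main(void)
{
	take_fault(0x1000);             /* user-space fault, CR2 = 0x1000 */

	nmi_handler();                  /* NMI fires before the fault handler reads CR2 */

	/* The interrupted page-fault handler now sees the wrong address,
	 * much like the address=irq_stack_union entry in the quoted trace below. */
	printf("handler sees CR2 = %#lx, expected 0x1000\n", cr2);
	return 0;
}
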
On February 27, 2014 5:34:34 PM PST, Vince Weaver <vincent.weaver@...ne.edu> wrote:
>On Thu, 27 Feb 2014, H. Peter Anvin wrote:
>
>> On 02/27/2014 02:31 PM, Steven Rostedt wrote:
>> >
>> > Yeah, something is getting messed up.
>> >
>>
>> What it *looks* like to me is that we try to nest the cr2
>save/restore,
>> which doesn't nest because it is a percpu variable.
>>
>> ... except in the x86-64 case, we *ALSO* save/restore cr2 inside
>> entry_64.S, which makes the stuff in do_nmi completely redundant and
>> there for no good reason.
>>
>> I would actually suggest we do the equivalent on i386 as well.
>>
>> Vince, could you try this patch as an experiment?
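
To make the nesting problem quoted above concrete, here is a user-space sketch (illustrative only; not the kernel code and not the patch being tested): a single shared slot, like a per-CPU variable, is overwritten by the inner save, so the outer restore puts back the wrong value; a save held in a local of each invocation, which is effectively what the entry_64.S register save does, unwinds correctly.

/* Illustrative only: why one shared (per-CPU style) CR2 save slot does not
 * nest, while a per-invocation save does.  All names are made up. */
#include <stdio.h>

static unsigned long cr2 = 0x1000;      /* fault address we must preserve */
static unsigned long percpu_saved_cr2;  /* single shared slot, like a per-CPU var */

static void handler_shared_slot(int depth)
{
	percpu_saved_cr2 = cr2;         /* save into the shared slot */
	cr2 = 0xbad0000 + depth;        /* handler itself takes a page fault */
	if (depth < 1)
		handler_shared_slot(depth + 1); /* nested entry reuses the slot */
	cr2 = percpu_saved_cr2;         /* restore: inner value, original lost */
}

static void handler_per_invocation(int depth)
{
	unsigned long saved_cr2 = cr2;  /* per-invocation save (stack/register) */
	cr2 = 0xbad0000 + depth;
	if (depth < 1)
		handler_per_invocation(depth + 1);
	cr2 = saved_cr2;                /* each level restores its own value */
}

int main(void)
{
	handler_shared_slot(0);
	printf("shared slot:    cr2 = %#lx (0x1000 was lost)\n", cr2);

	cr2 = 0x1000;
	handler_per_invocation(0);
	printf("per-invocation: cr2 = %#lx (preserved)\n", cr2);
	return 0;
}
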
>
>OK with your patch applied it does not segfault.
>
>The trace looks like you'd expect too:
>
>perf_fuzzer-2910 [000] 255.355331: page_fault_kernel: address=irq_stack_union ip=copy_user_handle_tail error_code=0x0
>perf_fuzzer-2910 [000] 255.355331: function: __do_page_fault
>perf_fuzzer-2910 [000] 255.355331: function: bad_area_nosemaphore
>perf_fuzzer-2910 [000] 255.355331: function: __bad_area_nosemaphore
>perf_fuzzer-2910 [000] 255.355332: function: no_context
>perf_fuzzer-2910 [000] 255.355332: function:  fixup_exception
>perf_fuzzer-2910 [000] 255.355332: function:  search_exception_tables
>perf_fuzzer-2910 [000] 255.355332: function:  search_extable
>perf_fuzzer-2910 [000] 255.355333: function: perf_output_begin
>perf_fuzzer-2910 [000] 255.355333: bprint: perf_output_begin: VMW: event type 2 config 2a st: 2c3e7
>perf_fuzzer-2910 [000] 255.355333: bprint: perf_output_begin: VMW: before event->parent (nil)
>perf_fuzzer-2910 [000] 255.355334: bprint: perf_output_begin: VMW: before rcu_dereference (nil)
>perf_fuzzer-2910 [000] 255.355334: function: __do_page_fault
>perf_fuzzer-2910 [000] 255.355334: function: down_read_trylock
>perf_fuzzer-2910 [000] 255.355334: function: _cond_resched
>perf_fuzzer-2910 [000] 255.355335: function: find_vma
>perf_fuzzer-2910 [000] 255.355335: function: handle_mm_fault
>perf_fuzzer-2910 [000] 255.355335: function: __do_fault
>perf_fuzzer-2910 [000] 255.355335: bputs: perf_mmap_fault: VMW: perf_mmap_fault
>perf_fuzzer-2910 [000] 255.355336: bprint: perf_mmap_fault: VMW: perf_mmap_fault 0xffff8800cba6a980
>perf_fuzzer-2910 [000] 255.355336: function: perf_mmap_to_page
>perf_fuzzer-2910 [000] 255.355336: function: _cond_resched
>perf_fuzzer-2910 [000] 255.355336: function: unlock_page
>perf_fuzzer-2910 [000] 255.355336: function: page_waitqueue
>perf_fuzzer-2910 [000] 255.355337: function: __wake_up_bit
>perf_fuzzer-2910 [000] 255.355337: bputs: perf_mmap_fault: VMW: perf_mmap_fault
>perf_fuzzer-2910 [000] 255.355337: function: _cond_resched
>perf_fuzzer-2910 [000] 255.355337: function: _raw_spin_lock
>perf_fuzzer-2910 [000] 255.355337: function: page_add_file_rmap
>perf_fuzzer-2910 [000] 255.355338: function: __inc_zone_page_state
>perf_fuzzer-2910 [000] 255.355338: function:  __inc_zone_state
>perf_fuzzer-2910 [000] 255.355338: function: set_page_dirty
>perf_fuzzer-2910 [000] 255.355338: function: page_mapping
>perf_fuzzer-2910 [000] 255.355338: function: anon_set_page_dirty
>perf_fuzzer-2910 [000] 255.355339: function: unlock_page
>
>
>Vince
--
Sent from my mobile phone. Please pardon brevity and lack of formatting.