Message-ID: <569C718A.9090802@linux.intel.com>
Date: Mon, 18 Jan 2016 13:00:58 +0800
From: Xiao Guangrong <guangrong.xiao@...ux.intel.com>
To: Paolo Bonzini <pbonzini@...hat.com>,
Sasha Levin <sasha.levin@...cle.com>,
Gleb Natapov <gleb@...nel.org>
Cc: LKML <linux-kernel@...r.kernel.org>,
Dmitry Vyukov <dvyukov@...gle.com>,
syzkaller <syzkaller@...glegroups.com>,
Marcelo Tosatti <mtosatti@...hat.com>,
Takuya Yoshikawa <yoshikawa_takuya_b1@....ntt.co.jp>
Subject: Re: kvm: access to invalid memory in mmu_zap_unsync_children
On 01/16/2016 01:06 AM, Paolo Bonzini wrote:
>
>
> On 15/01/2016 17:54, Sasha Levin wrote:
>> Hi all,
>>
>> While fuzzing with syzkaller on the latest -next kernel running in a KVM
>> tools guest, I've hit the following invalid memory access:
>>
>> [ 547.956284] UBSAN: Undefined behaviour in arch/x86/kvm/mmu.c:2011:17
>> [ 547.956940] index 3 is out of range for type 'kvm_mmu_page *[3]'
>> [ 547.957567] CPU: 0 PID: 21577 Comm: syz-executor Tainted: G D 4.4.0-next-20160114-sasha-00021-gf1273d1-dirty #2798
>> [ 547.958739] 1ffff1001819be5c 000000002fa0e55b ffff8800c0cdf360 ffffffff83433c4e
>> [ 547.972448] 0000000041b58ab3 ffffffff8f960c38 ffffffff83433b86 ffff8800c0cdf328
>> [ 547.973277] 0000000000000001 000000002fa0e55b ffffffff8feb8440 ffff8800c0cdf3f0
>> [ 547.974102] Call Trace:
>> [ 547.974424] dump_stack (lib/dump_stack.c:52)
>> [ 547.975774] ubsan_epilogue (lib/ubsan.c:165)
>> [ 547.976408] __ubsan_handle_out_of_bounds (lib/ubsan.c:382)
>> [ 547.980877] mmu_zap_unsync_children (arch/x86/kvm/mmu.c:2011 arch/x86/kvm/mmu.c:2272)
>
> Marcelo/Takuya/Xiao,
>
> do you know what's the point of the assignment in kvm_mmu_pages_init?
>
> It seems to me that it should be
>
> parents->parent[0] = NULL;
>
> since the only user of the ->parent[] array, mmu_pages_clear_parents,
> walks the array up from parents->parent[0].
>
Yes, it is buggy, and it surprised me that it was never triggered in a nested environment.
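For context, the relevant pieces in the 4.4-era arch/x86/kvm/mmu.c look
roughly like this (quoting from memory, please double-check against the
tree):

struct mmu_page_path {
	struct kvm_mmu_page *parent[PT64_ROOT_LEVEL-1];
	unsigned int idx[PT64_ROOT_LEVEL-1];
};

static void mmu_pages_clear_parents(struct mmu_page_path *parents)
{
	struct kvm_mmu_page *sp;
	unsigned int level = 0;

	do {
		unsigned int idx = parents->idx[level];
		sp = parents->parent[level];
		if (!sp)
			return;

		clear_unsync_child_bit(sp, idx);
		level++;
	} while (level < PT64_ROOT_LEVEL-1 && !sp->unsync_children);
}

With PT64_ROOT_LEVEL == 4 both arrays have only three slots, so the
parents->parent[parent->role.level-1] store writes parent[3] for a
level-4 root, which is exactly the out-of-bounds access UBSAN flagged
at mmu.c:2011.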
> Any other opinions?
The reason we size the array as [PT64_ROOT_LEVEL - 1] is that we never take
the last level (level = 1) into account.
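So for the usual 4-level setup, the intended slot assignment for the
parent path is (a sketch, not code from the tree):

/*
 * PT64_ROOT_LEVEL == 4, hence three slots:
 *
 *   role.level == 2  ->  parent[0]
 *   role.level == 3  ->  parent[1]
 *   role.level == 4  ->  parent[2]
 *
 * Level-1 pages are leaves and never act as parents in the
 * walk, so no slot is reserved for them.
 */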
I think the diff below can fix it, but it has not been tested yet.
+#define INVALID_INDEX (-1)
+
 static int mmu_unsync_walk(struct kvm_mmu_page *sp,
 			   struct kvm_mmu_pages *pvec)
 {
 	if (!sp->unsync_children)
 		return 0;
 
-	mmu_pages_add(pvec, sp, 0);
+	/*
+	 * Do not record an index in the parent for the sp the
+	 * walk starts from.
+	 */
+	mmu_pages_add(pvec, sp, INVALID_INDEX);
 	return __mmu_unsync_walk(sp, pvec);
 }
@@ -1980,8 +1986,11 @@ static int mmu_pages_next(struct kvm_mmu_pages *pvec,
 			return n;
 		}
 
-		parents->parent[sp->role.level-2] = sp;
-		parents->idx[sp->role.level-1] = pvec->page[n].idx;
+		parents->parent[sp->role.level - 2] = sp;
+
+		/* Skip setting the index of the sp the walk starts from. */
+		if (pvec->page[n].idx != INVALID_INDEX)
+			parents->idx[sp->role.level - 1] = pvec->page[n].idx;
 	}
 
 	return n;
@@ -1999,6 +2008,7 @@ static void mmu_pages_clear_parents(struct mmu_page_path *parents)
 		if (!sp)
 			return;
 
+		WARN_ON(idx == INVALID_INDEX);
 		clear_unsync_child_bit(sp, idx);
 		level++;
 	} while (level < PT64_ROOT_LEVEL-1 && !sp->unsync_children);
@@ -2008,7 +2018,7 @@ static void kvm_mmu_pages_init(struct kvm_mmu_page *parent,
 				struct mmu_page_path *parents,
 				struct kvm_mmu_pages *pvec)
 {
-	parents->parent[parent->role.level-1] = NULL;
+	parents->parent[parent->role.level - 2] = NULL;
 	pvec->nr = 0;
 }
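To convince myself that the sentinel handling does what I expect, here
is a tiny userspace mock of the idx bookkeeping (all names are made up
for illustration; it mirrors only the skip-the-sentinel logic from the
patched mmu_pages_next(), nothing else):

#include <assert.h>
#include <stdio.h>

#define INVALID_INDEX (-1)

/* Toy stand-in for the (sp, idx) pairs that mmu_unsync_walk() collects. */
struct toy_entry {
	int level;	/* would be sp->role.level */
	int idx;	/* index in the parent, or INVALID_INDEX */
};

int main(void)
{
	/* Entry 0 mimics the sp the walk starts from (a level-4 root). */
	struct toy_entry pvec[] = {
		{ 4, INVALID_INDEX },
		{ 3, 5 },
		{ 2, 7 },
	};
	int idx[3] = { INVALID_INDEX, INVALID_INDEX, INVALID_INDEX };
	unsigned int n;

	/* Mirror the patched mmu_pages_next(): skip the sentinel entry. */
	for (n = 0; n < sizeof(pvec) / sizeof(pvec[0]); n++) {
		if (pvec[n].idx != INVALID_INDEX)
			idx[pvec[n].level - 1] = pvec[n].idx;
	}

	/*
	 * The skipped level-4 entry would have written idx[3], one past
	 * the end of the three-slot array.
	 */
	assert(idx[2] == 5 && idx[1] == 7);
	printf("sentinel entry skipped as expected\n");
	return 0;
}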