Message-ID: <55ADF5A2.1020005@intel.com>
Date: Tue, 21 Jul 2015 15:32:50 +0800
From: Pan Xinhui <xinhuix.pan@...el.com>
To: Borislav Petkov <bp@...e.de>
CC: "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
tglx@...utronix.de, mingo@...hat.com, hpa@...or.com,
x86@...nel.org, toshi.kani@...com, jgross@...e.com,
mcgrof@...e.com, "mnipxh@....com" <mnipxh@....com>
Subject: Re: [PATCH] x86/mm/pat: Do a small optimization in reserve_memtype
Hi Borislav,
Thanks for your reply. :)
On 2015/07/21 14:55, Borislav Petkov wrote:
> On Tue, Jul 21, 2015 at 02:29:35PM +0800, Pan Xinhui wrote:
>> From: Pan Xinhui <xinhuix.pan@...el.com>
>>
>> It's safe and more reasonable to unlock memtype_lock right after
>> rbt_memtype_check_insert.
>>
>> Signed-off-by: Pan Xinhui <xinhuix.pan@...el.com>
>> ---
>> arch/x86/mm/pat.c | 7 ++-----
>> 1 file changed, 2 insertions(+), 5 deletions(-)
>>
>> diff --git a/arch/x86/mm/pat.c b/arch/x86/mm/pat.c
>> index 188e3e0..cb75639 100644
>> --- a/arch/x86/mm/pat.c
>> +++ b/arch/x86/mm/pat.c
>> @@ -538,20 +538,17 @@ int reserve_memtype(u64 start, u64 end, enum page_cache_mode req_type,
>> new->type = actual_type;
>>
>> spin_lock(&memtype_lock);
>> -
>> err = rbt_memtype_check_insert(new, new_type);
>> + spin_unlock(&memtype_lock);
>> +
>> if (err) {
>> pr_info("x86/PAT: reserve_memtype failed [mem %#010Lx-%#010Lx], track %s, req %s\n",
>> start, end - 1,
>> cattr_name(new->type), cattr_name(req_type));
>> kfree(new);
>> - spin_unlock(&memtype_lock);
>> -
>> return err;
>> }
>>
>> - spin_unlock(&memtype_lock);
>> -
>> dprintk("reserve_memtype added [mem %#010Lx-%#010Lx], track %s, req %s, ret %s\n",
>> start, end - 1, cattr_name(new->type), cattr_name(req_type),
>> new_type ? cattr_name(*new_type) : "-");
>
> While you're at it, please fix a similar issue in lookup_memtype() and also
> improve the comments over memtype_lock to explain what exactly it protects.
Let me explain why we can't unlock memtype_lock right after rbt_memtype_lookup in lookup_memtype().
CPUA, in lookup_memtype(), if we unlocked right after rbt_memtype_lookup():

	spin_lock(&memtype_lock);
	entry = rbt_memtype_lookup(paddr);
	spin_unlock(&memtype_lock);

CPUB then runs the free_memtype() path:

	spin_lock(&memtype_lock);
	entry = rbt_memtype_erase(start, end);
	spin_unlock(&memtype_lock);

	if (!entry) {
		printk(KERN_INFO "%s:%d freeing invalid memtype [mem %#010Lx-%#010Lx]\n",
			current->comm, current->pid, start, end - 1);
		return -EINVAL;
	}
	kfree(entry);

CPUA continues with a stale pointer:

	if (entry != NULL)
		rettype = entry->type;
	else
		rettype = _PAGE_CACHE_UC_MINUS;
Yes, we may access freed memory at that point: *entry* points at a node stored in the rb-tree, so we need to hold the lock while we access it.
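So for lookup_memtype() the lock has to cover the dereference as well. A minimal sketch of the safe pattern (based on the current structure of lookup_memtype(); *rettype* and *paddr* are its existing local and parameter):
----------------------------------
	spin_lock(&memtype_lock);

	entry = rbt_memtype_lookup(paddr);
	if (entry != NULL)
		rettype = entry->type;	/* safe: memtype_lock still held */
	else
		rettype = _PAGE_CACHE_UC_MINUS;

	spin_unlock(&memtype_lock);
----------------------------------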
In short, the lock is needed whenever we access the data stored in the rb-tree. :)
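For the comment, maybe something along these lines (just a sketch for the declaration in pat.c; the exact wording is open):
----------------------------------
/*
 * memtype_lock protects the memtype rb-tree: the tree structure itself
 * and the struct memtype entries linked into it. An entry may be
 * erased and kfree()d as soon as the lock is dropped, so entries must
 * not be dereferenced outside of it.
 */
static DEFINE_SPINLOCK(memtype_lock);
----------------------------------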
I found another bug, although it's very hard to hit, also in reserve_memtype():
----------------------------------
	err = rbt_memtype_check_insert(new, new_type);
	if (err) {
		printk(KERN_INFO "reserve_memtype failed [mem %#010Lx-%#010Lx], track %s, req %s\n",
			start, end - 1,
			cattr_name(new->type), cattr_name(req_type));
		kfree(new);
		spin_unlock(&memtype_lock);
		return err;
	}
	spin_unlock(&memtype_lock);	/* problem: the dprintk below still accesses *new* */
	dprintk("reserve_memtype added [mem %#010Lx-%#010Lx], track %s, req %s, ret %s\n",
		start, end - 1, cattr_name(new->type), cattr_name(req_type),
		new_type ? cattr_name(*new_type) : "-");
----------------------------------
If no error is returned, we unlock memtype_lock while *new* is already stored in the rb-tree, so *new* could be freed at any time; the race is the same as in the scenario above. The dprintk that follows then accesses *new* via cattr_name(new->type).
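One way to fix it, as a sketch only (not necessarily what v2 will look like; *new_pcm* is a hypothetical local I'm introducing here), is to snapshot new->type while the lock is still held, so the dprintk never touches *new* after the unlock:
----------------------------------
	enum page_cache_mode new_pcm;

	spin_lock(&memtype_lock);
	err = rbt_memtype_check_insert(new, new_type);
	if (!err)
		new_pcm = new->type;	/* copy while *new* cannot yet be freed */
	spin_unlock(&memtype_lock);

	if (err) {
		/* on error *new* was never inserted, so it is still private to us */
		printk(KERN_INFO "reserve_memtype failed [mem %#010Lx-%#010Lx], track %s, req %s\n",
			start, end - 1,
			cattr_name(new->type), cattr_name(req_type));
		kfree(new);
		return err;
	}

	dprintk("reserve_memtype added [mem %#010Lx-%#010Lx], track %s, req %s, ret %s\n",
		start, end - 1, cattr_name(new_pcm), cattr_name(req_type),
		new_type ? cattr_name(*new_type) : "-");
----------------------------------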
I will send a v2 patch to fix this issue. I should have taken a deeper look at this dprintk before sending this patch.
Thanks,
xinhui
> Thanks.
>