Message-ID: <5B5FBF4C.3030605@huawei.com>
Date: Tue, 31 Jul 2018 09:45:48 +0800
From: piaojun <piaojun@...wei.com>
To: Dominique Martinet <asmadeus@...ewreck.org>
CC: <v9fs-developer@...ts.sourceforge.net>,
<linux-fsdevel@...r.kernel.org>, Greg Kurz <groug@...d.org>,
Matthew Wilcox <willy@...radead.org>,
<linux-kernel@...r.kernel.org>
Subject: Re: [V9fs-developer] [PATCH 2/2] net/9p: add a per-client fcall
kmem_cache
On 2018/7/31 9:35, Dominique Martinet wrote:
> piaojun wrote on Tue, Jul 31, 2018:
>> Could you paste some test results from before and after the patch is applied?
>
> The only performance tests I did were sent to the list a couple of mails
> earlier; you can find them here:
> http://lkml.kernel.org/r/20180730093101.GA7894@nautica
>
> In particular, the benchmark results for small writes just before and
> after this patch, without KASAN (these are the same numbers as in the
> link; hardware/setup is described there):
> - no alloc (4.18-rc7 request cache): 65.4k req/s
> - non-power of two alloc, no patch: 61.6k req/s
> - power of two alloc, no patch: 62.2k req/s
> - non-power of two alloc, with patch: 64.7k req/s
> - power of two alloc, with patch: 65.1k req/s
>
> I'm rather happy with the result; I didn't expect that using a dedicated
> cache would bring this much back, but it's certainly worth it.
>
That looks like a clear improvement.
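
(For anyone following along, the per-client cache boils down to something
like the snippet below. This is only a rough sketch from my side; the exact
cache name, flags, sizing and allocation sites in the actual patch may
differ.)

	/* one slab cache per client, sized to the negotiated msize,
	 * so fcall buffers no longer go through plain kmalloc */
	clnt->fcall_cache = kmem_cache_create("9p-fcall-cache",
					      clnt->msize, 0, 0, NULL);

	/* allocate and later release a buffer from that cache */
	fc->sdata = kmem_cache_alloc(clnt->fcall_cache, GFP_NOFS);
	...
	kmem_cache_free(clnt->fcall_cache, fc->sdata);
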
>>> @@ -1011,6 +1034,7 @@ void p9_client_destroy(struct p9_client *clnt)
>>>
>>> p9_tag_cleanup(clnt);
>>>
>>> + kmem_cache_destroy(clnt->fcall_cache);
>>
>> We could set fcall_cache to NULL to guard against use-after-free.
>>
>>> kfree(clnt);
>
> Hmm, I understand where this comes from, but I'm not sure I agree.
> If someone tries to access the client while/after it is freed, things are
> going to break anyway; I'd rather let things break as obviously as
> possible than try to cover it up.
>
Setting it to NULL is not a big deal; I'd like to hear others' opinions on this.
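
To make the suggestion concrete, what I had in mind in p9_client_destroy()
is just the following (a sketch only, not a tested change):

	p9_tag_cleanup(clnt);

	kmem_cache_destroy(clnt->fcall_cache);
	/* suggested: clear the pointer so a stray late user oopses on
	 * NULL instead of touching a destroyed cache */
	clnt->fcall_cache = NULL;

	kfree(clnt);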