Message-ID: <CACT4Y+Y+QgNWub4uRZodCz4QTQm6SA-AkuWS+0XvK28YWNvd6g@mail.gmail.com>
Date: Mon, 18 Jan 2016 20:31:10 +0100
From: Dmitry Vyukov <dvyukov@...gle.com>
To: Andrey Ryabinin <ryabinin.a.a@...il.com>
Cc: syzkaller <syzkaller@...glegroups.com>,
Andrew Morton <akpm@...ux-foundation.org>,
David Drysdale <drysdale@...gle.com>,
Kees Cook <keescook@...gle.com>,
Quentin Casasnovas <quentin.casasnovas@...cle.com>,
Sasha Levin <sasha.levin@...cle.com>,
Vegard Nossum <vegard.nossum@...cle.com>,
LKML <linux-kernel@...r.kernel.org>,
Eric Dumazet <edumazet@...gle.com>,
Tavis Ormandy <taviso@...gle.com>,
Bjorn Helgaas <bhelgaas@...gle.com>,
Kostya Serebryany <kcc@...gle.com>,
Alexander Potapenko <glider@...gle.com>,
Catalin Marinas <catalin.marinas@....com>,
Will Deacon <will.deacon@....com>,
linux-arm-kernel@...ts.infradead.org
Subject: Re: [PATCH v2] kernel: add kcov code coverage
On Mon, Jan 18, 2016 at 2:34 PM, Andrey Ryabinin <ryabinin.a.a@...il.com> wrote:
> 2016-01-15 17:07 GMT+03:00 Dmitry Vyukov <dvyukov@...gle.com>:
>>>>> Note that this works only for cache-coherent architectures.
>>>>> For incoherent arches you'll need to flush_dcache_page() somewhere.
>>>>> Perhaps it could be done on exit to userspace, since flushing here is
>>>>> certainly an overkill.
>>>>
>>>> I can't say that I understand the problem. Does it have to do with
>>>> the fact that the buffer is shared between kernel and user-space?
>>>> The current code is OK from the plain multi-threading side, as the
>>>> user must not read the buffer concurrently with kernel writes (that
>>>> would not yield anything useful).
>>>
>>> It's not about SMP.
>>> This problem is about virtually indexed aliasing D-caches and can be
>>> observed on a uniprocessor system.
>>> You have 3 virtual addresses (user-space, linear mapping and vmalloc)
>>> mapped to the same physical page.
>>> With an aliasing cache it's possible to have multiple cache lines
>>> representing the same physical page.
>>> So the kernel might not see an update made by userspace, and vice
>>> versa, because kernel and userspace use different virtual addresses.
>>>
>>> And btw, flush_dcache_page() would be the wrong choice, since kcov_area
>>> is a vmalloc address, not a linear address.
>>> So we need something that flushes vmalloc addresses.
>>>
>>> Alternatively, we could simply mlock that memory and talk to user
>>> space via get/put_user(). No flush would be required.
>>> And we would avoid another potential problem - lack of vmalloc
>>> address space on 32-bit.
>>
>> Do you mean that user-space allocates a buffer and passes this buffer
>> to ioctl(KCOV_INIT); kernel locks this range and then directly writes
>> to it?
>>
>
> That's one way of doing it. Another is to allocate, mmap, and pin
> the pages in kcov_mmap().
Which means that we can hide it under the same interface, right?
In v4 I preallocate all pages in kcov_mmap(), as suggested by Kirill.
If we can hide it under the current interface, then I would prefer to
do the locking later, in subsequent patches (probably the ones that
port kcov to an arch that requires flushing).
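
For reference, the v4 preallocation looks roughly like this (sketch
from memory, error paths and locking elided; needs <linux/vmalloc.h>
and <linux/mm.h>, and the kcov struct field names are approximate):

    static int kcov_mmap(struct file *filep, struct vm_area_struct *vma)
    {
            struct kcov *kcov = filep->private_data;
            unsigned long size = vma->vm_end - vma->vm_start;
            unsigned long off;
            void *area;

            /* Preallocate the whole buffer up front, so the coverage
             * hot path never takes a page fault. */
            area = vmalloc_user(size);
            if (!area)
                    return -ENOMEM;
            kcov->area = area;

            /* Insert each preallocated page into the user vma. */
            for (off = 0; off < size; off += PAGE_SIZE) {
                    struct page *page = vmalloc_to_page(area + off);

                    if (vm_insert_page(vma, vma->vm_start + off, page))
                            return -EAGAIN;
            }
            return 0;
    }

vmalloc_user() zeroes the area and makes it safe to map into
userspace, and vmalloc pages are not swappable, so no faulting on the
hot path. If an aliasing arch ever needs the flush, something like
flush_kernel_vmap_range(kcov->area, size) on return to user space
might be enough, since the area is a vmalloc address - but that is an
assumption on my side, I have not checked the individual arches.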
>> I'm afraid it becomes prohibitively expensive with put_user/get_user:
>> https://gist.githubusercontent.com/dvyukov/568f2e4a61afc910f880/raw/540cc071f1d561b9a3f9e50183d681be265af8c3/gistfile1.txt
>>
>
> Right, but it should be better with __get_user/__put_user.
>
>> Also, won't it require the same flush, since the region is mmapped
>> into several processes (and the process that reads is not the one
>> that sets up the region)?
>
> But only a child process can inherit the kcov mapping from its
> parent, so it will be the same physical->virtual mapping as in the
> parent.
ack
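
For completeness, the __get_user/__put_user variant mentioned above
would make the hot path look roughly like this (hypothetical sketch,
not what the patch does; the kcov_area/kcov_size fields are
approximate, and it assumes the buffer was pinned and
access_ok()-checked at KCOV_ENABLE time):

    void notrace __sanitizer_cov_trace_pc(void)
    {
            unsigned long pos;
            unsigned long __user *area = current->kcov_area;

            if (!area)
                    return;
            /* area[0] holds the number of PCs recorded so far. */
            if (__get_user(pos, area))
                    return;
            if (pos + 1 < current->kcov_size) {
                    /* Store the caller's PC, then publish the count. */
                    if (__put_user(_RET_IP_, &area[pos + 1]))
                            return;
                    __put_user(pos + 1, area);
            }
    }

Even the __ variants keep the uaccess entry/exit overhead on every
hit (e.g. STAC/CLAC with SMAP on x86), which is presumably a big part
of the cost in the gist above; direct stores into the vmalloc area
avoid all of that.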
>> The size of the coverage buffer I currently use is 64K. I hope it is
>> not a problem for 32-bit arches.
>>
>
> 64K - per process. It's hard to tell whether this is a real problem
> or not, since it depends on how many processes collect coverage, on
> the size of the vmalloc area, and on how much of it the rest of the
> kernel uses.
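Back-of-the-envelope (my numbers, not measured): with the default
~128M vmalloc arena on 32-bit x86, 64K per process leaves room for
roughly 128M / 64K = 2048 concurrently enabled kcov buffers before
the arena is exhausted, ignoring all other vmalloc users. And with
4-byte unsigned long there, 64K holds 16K PCs per buffer. So it looks
tolerable for typical fuzzing setups, but the concern is fair.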