Message-ID: <17439540.2334.1522773387555.JavaMail.zimbra@efficios.com>
Date: Tue, 3 Apr 2018 12:36:27 -0400 (EDT)
From: Mathieu Desnoyers <mathieu.desnoyers@...icios.com>
To: One Thousand Gnomes <gnomes@...rguk.ukuu.org.uk>
Cc: Peter Zijlstra <peterz@...radead.org>,
"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>,
Boqun Feng <boqun.feng@...il.com>,
Andy Lutomirski <luto@...capital.net>,
Dave Watson <davejwatson@...com>,
linux-kernel <linux-kernel@...r.kernel.org>,
linux-api <linux-api@...r.kernel.org>,
Paul Turner <pjt@...gle.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Russell King <linux@....linux.org.uk>,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>,
"H. Peter Anvin" <hpa@...or.com>, Andrew Hunter <ahh@...gle.com>,
Andi Kleen <andi@...stfloor.org>, Chris Lameter <cl@...ux.com>,
Ben Maurer <bmaurer@...com>, rostedt <rostedt@...dmis.org>,
Josh Triplett <josh@...htriplett.org>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Catalin Marinas <catalin.marinas@....com>,
Will Deacon <will.deacon@....com>,
Michael Kerrisk <mtk.manpages@...il.com>,
Alexander Viro <viro@...iv.linux.org.uk>
Subject: Re: [RFC PATCH for 4.17 02/21] rseq: Introduce restartable
sequences system call (v12)
----- On Apr 2, 2018, at 11:33 AM, Mathieu Desnoyers mathieu.desnoyers@...icios.com wrote:
> ----- On Apr 1, 2018, at 12:13 PM, One Thousand Gnomes
> gnomes@...rguk.ukuu.org.uk wrote:
>
>> On Tue, 27 Mar 2018 12:05:23 -0400
>> Mathieu Desnoyers <mathieu.desnoyers@...icios.com> wrote:
>>
>>> Expose a new system call allowing each thread to register one userspace
>>> memory area to be used as an ABI between kernel and user-space for two
>>> purposes: user-space restartable sequences and quick access to read the
>>> current CPU number value from user-space.
>>
>> What is the *worst* case timing achievable by using the atomics ? What
>> does it do to real time performance requirements ?
>
> Given that there are two system calls introduced in this series (rseq and
> cpu_opv), can you clarify which system call you refer to in the two questions
> above ?
>
> For rseq, given that its userspace works pretty much like a read seqlock
> (it retries on failure), it has no impact whatsoever on scheduler behavior.
> So characterizing its worst case timing does not appear to be relevant.
>
>> For cpu_opv you now
>> give an answer but your answer is assuming there isn't another thread
>> actively thrashing the cache or store buffers, and that the user didn't
>> sneakily pass in a page of uncacheable memory (eg framebuffer, or GPU
>> space).
>
> Are those considered as device pages ?
>
>>
>> I don't see anything that restricts it to cached pages. With that check
>> in place for x86 at least it would probably be ok and I think the sneaky
>> attacks to make it uncacheable would fail because you've got the pages
>> locked so trying to give them to an accelerator will block until you are
>> done.
>>
>> I still like the idea it's just the latencies concern me.
>
> Indeed, cpu_opv touches pages that are shared with user-space with
> preemption off, so this one affects the scheduler latency. The worst-case
> timings I measured for cpu_opv were with cache-cold memory, so I expect
> that another thread actively thrashing the cache would be in the same
> ballpark. It does not account for a concurrent thread thrashing the store
> buffers, though.
>
> The checks enforcing which pages can be touched by cpu_opv operations are
> done within cpu_op_check_page(). is_zone_device_page() is used to ensure no
> device page is touched with preempt disabled. I understand that you would
> prefer to disallow pages of uncacheable memory as well, which I'm fine with.
> Is there an API similar to is_zone_device_page() to check whether a page is
> uncacheable ?

Looking into this a bit more, I notice the following: the pgprot_noncached
pgprot (_PAGE_NOCACHE on x86) is part of vma->vm_page_prot. Therefore,
for userspace to provide pointers to noncached pages as input to cpu_opv,
those pages need to belong to a userspace vma whose vm_page_prot is
pgprot_noncached.
The cpu_opv system call uses get_user_pages_fast() to grab the struct page
from the userspace addresses, and then passes those pages to vm_map_ram(),
with a PAGE_KERNEL pgprot. This creates a temporary kernel mapping to those
pages, which is then used to read/write from/to those pages with preemption
disabled.
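The mapping flow described above can be sketched as follows. This is a non-compilable, hand-written sketch based on this description, not code from the patch itself: get_user_pages_fast(), is_zone_device_page(), vm_map_ram() and PAGE_KERNEL are real kernel APIs (as of the 4.17 era), but the helper name and its error handling are hypothetical:

```
/*
 * Sketch of the per-page flow: pin the user page, reject device
 * pages, then create a temporary cacheable kernel-side mapping.
 * The preempt-off copy goes through this PAGE_KERNEL mapping, not
 * through the user's (possibly noncached) vma mapping.
 */
static int cpu_op_map_user_page(unsigned long uaddr, int write,
				struct page **pagep, void **kaddrp)
{
	struct page *page;
	void *kaddr;

	if (get_user_pages_fast(uaddr, 1, write, &page) != 1)
		return -EFAULT;
	if (is_zone_device_page(page)) {	/* no device pages with preempt off */
		put_page(page);
		return -EFAULT;
	}
	kaddr = vm_map_ram(&page, 1, numa_node_id(), PAGE_KERNEL);
	if (!kaddr) {
		put_page(page);
		return -ENOMEM;
	}
	*pagep = page;
	*kaddrp = kaddr;
	return 0;
}
```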
Therefore, with the proposed cpu_opv implementation, the kernel is not
touching noncached mappings with preemption disabled, which should take
care of your latency concern.
Am I missing something ?
Thanks,
Mathieu
--
Mathieu Desnoyers
EfficiOS Inc.
http://www.efficios.com