Message-ID: <CAPNVh5chP3Zz+ww8WLb9bUYNikW-1PyOV=3==BM-92BgogaB3w@mail.gmail.com>
Date: Tue, 14 Sep 2021 09:29:00 -0700
From: Peter Oskolkov <posk@...gle.com>
To: Peter Zijlstra <peterz@...radead.org>
Cc: Jann Horn <jannh@...gle.com>, Peter Oskolkov <posk@...k.io>,
Ingo Molnar <mingo@...hat.com>,
Thomas Gleixner <tglx@...utronix.de>,
linux-kernel@...r.kernel.org, linux-api@...r.kernel.org,
Paul Turner <pjt@...gle.com>, Ben Segall <bsegall@...gle.com>,
Andrei Vagin <avagin@...gle.com>,
Thierry Delisle <tdelisle@...terloo.ca>
Subject: Re: [PATCH 2/4 v0.5] sched/umcg: RFC: add userspace atomic helpers
On Tue, Sep 14, 2021 at 1:09 AM Peter Zijlstra <peterz@...radead.org> wrote:
>
> On Thu, Sep 09, 2021 at 12:06:58PM -0700, Peter Oskolkov wrote:
> > On Wed, Sep 8, 2021 at 4:39 PM Jann Horn <jannh@...gle.com> wrote:
[...]
>
> Durr.. so yeah this is a bit of a chicken and egg problem here. We need
> a userspace page to notify we're blocked, but at the same time,
> accessing said page can get us blocked.
>
> And then worse, as Jann said, we cannot do this in the appropriate spot
> because we could be blocking on mmap_sem, so we must not require
> mmap_sem to make progress etc.. :/
>
> Now, in reality actually taking a fault for these pages is extremely
> unlikely, but if we do, there's really no option but to block and wait
> for it without notification. Tough luck there.
In the version of the patchset that I'm preparing to send I've decided
to punt on the issue and just ask userspace to deal with locking
the memory as it sees fit: mlock() is available, and as far as I can
tell RLIMIT_MEMLOCK is decently sized by default (6MB on Ubuntu, so
locked memory can hold more than 100k struct umcg_task instances if
nothing else uses it); if that is not enough for some special case,
it can be adjusted at a higher level in userspace. If we get a
page fault when we access struct umcg_task in the kernel, we just kill
the task.
Does the approach seem reasonable for the initial version of the patchset?
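
Roughly, what I have in mind on the userspace side is something like
the sketch below. The struct layout, alignment and the registration
step are placeholders, not the actual uapi from the patchset; the only
point here is mlock()/munlock() and the RLIMIT_MEMLOCK accounting:

/*
 * Hedged sketch: lock the memory holding the worker's struct umcg_task
 * so the kernel never faults when accessing it. The struct layout below
 * is illustrative only.
 */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>

struct umcg_task {                  /* placeholder layout */
	uint64_t state;
	uint64_t next_tid;
	uint64_t idle_workers_ptr;
} __attribute__((aligned(64)));     /* keep it inside a single page */

int main(void)
{
	struct umcg_task *ut;

	/* aligned_alloc() keeps the struct from straddling a page boundary */
	ut = aligned_alloc(64, sizeof(*ut));
	if (!ut)
		return 1;

	/*
	 * Lock the page so kernel accesses to it cannot fault;
	 * this counts against RLIMIT_MEMLOCK.
	 */
	if (mlock(ut, sizeof(*ut))) {
		perror("mlock");
		return 1;
	}

	/* ... register the worker with the umcg syscall, run, etc. ... */

	munlock(ut, sizeof(*ut));
	free(ut);
	return 0;
}
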
>
> So what we can do, is use get_user_page() on the appropriate pages
> (alignment ensures the whole umcg struct is in a single page etc..)
> the moment a umcg task enters the kernel. For this we need some
> SYSCALL_WORK_ENTER flag.
>
> So normally a task would have ->umcg_page and ->umcg_server_page be
> NULL, the above SYSCALL_WORK_SYSCALL_UMCG flag would get_user_page() the
> self and server pages. If get_user_page() blocks, these fields would
> still be NULL and sched_submit_work() would not do anything, c'est la
> vie.
>
> Once we have the pages, any actual blocking hitting sched_submit_work()
> can do the updates without further blocking. It can then also put_page()
> and clear the ->umcg_{,server_}page pointers, because the task_work that
> will set RUNNABLE *can* suffer mmap_sem (again, unlikely, again tough
> luck if it does).
>
> The reason for put'ing the pages on blocking, is that this guarantees
> the pages are only pinned for a short amount of time, and 'never' by a
> blocked task. IOW, it's a proper transient pin and doesn't require extra
> special care or accounting.
I'd prefer to defer this smart/transient pinning of pages until later,
if mlock() solves the issue for the moment.
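
For my own reference, my rough understanding of the approach is
something like the sketch below; the task_struct fields and the
syscall-entry hook are made up for illustration, this is not code I'm
proposing to merge:

/*
 * Hedged sketch of the transient-pin idea, not actual patchset code:
 * the field names (umcg_task_ptr, umcg_worker_page) and the hook that
 * calls this on SYSCALL_WORK_SYSCALL_UMCG entry are invented here.
 */
#include <linux/mm.h>
#include <linux/sched.h>

static int umcg_pin_worker_page(struct task_struct *tsk)
{
	unsigned long addr = (unsigned long)tsk->umcg_task_ptr;
	struct page *page;
	int ret;

	/*
	 * May block on mmap_lock / page faults -- acceptable here,
	 * on the syscall-entry path, not inside sched_submit_work().
	 */
	ret = pin_user_pages_fast(addr, 1, FOLL_WRITE, &page);
	if (ret != 1)
		return ret < 0 ? ret : -EFAULT;

	tsk->umcg_worker_page = page;   /* non-NULL => safe to update */
	return 0;
}

static void umcg_unpin_worker_page(struct task_struct *tsk)
{
	/* Called when the task blocks, so the pin stays transient. */
	if (tsk->umcg_worker_page) {
		unpin_user_page(tsk->umcg_worker_page);
		tsk->umcg_worker_page = NULL;
	}
}
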
> Also, can you *please* convert that RST crud to a text file, it's
> absolutely unreadable gunk. Those documentation files should be readable
> as plain text first and foremost. That whole rendering to html crap is
> nonsense. Using a browser to read a text file is insane.
Will do. Maybe we can have both an RST and a TXT version of the
document? I think most files in /Documentation are RST...
Thanks,
Peter