Message-ID: <20210225205820.GC2858050@casper.infradead.org>
Date: Thu, 25 Feb 2021 20:58:20 +0000
From: Matthew Wilcox <willy@...radead.org>
To: linux-mm@...ck.org, linuxppc-dev@...ts.ozlabs.org,
linux-s390@...r.kernel.org
Cc: linux-kernel@...r.kernel.org
Subject: Freeing page tables through RCU

In order to walk the page tables without holding the mmap semaphore, it
must be possible to prevent them from being freed and reused (e.g. if
munmap() races with viewing /proc/$pid/smaps).

There is various commentary within the mm code on how to prevent this.
One way is to disable interrupts, relying on that to block the rcu_sched
grace period or the IPIs used to free page tables.  I don't think the
RT people would be terribly happy that reading a proc file disables
interrupts, and it doesn't work for architectures that free page tables
directly instead of batching them into an rcu_sched callback (because
the IPI may not be sent to this CPU if the task has never run on it).
See "Fast GUP" in mm/gup.c.
Ideally, I'd like rcu_read_lock() to delay page table reuse.  This is
close to trivial for architectures which use entire pages or multiples
of pages for the levels of their page tables, as we can use the rcu_head
embedded in struct page to queue the page for RCU freeing.
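
For that whole-page case, the mechanics are about as simple as this
uncompiled sketch (the helper names are invented, not the real mm API):

#include <linux/mm.h>
#include <linux/rcupdate.h>

static void pte_page_free_rcu(struct rcu_head *head)
{
	struct page *page = container_of(head, struct page, rcu_head);

	/* runs after a grace period; only now can the page be reused */
	__free_page(page);
}

static void pte_free_deferred(struct page *pte_page)
{
	/* any walker inside rcu_read_lock() keeps the table alive */
	call_rcu(&pte_page->rcu_head, pte_page_free_rcu);
}
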
s390 and powerpc are the only two architectures I know of that have
levels of their page tables which are smaller than their PAGE_SIZE.

I'd like to discuss options.  There may be a complicated scheme that
allows partial pages to be freed via RCU, but I have something simpler
in mind.  powerpc in particular can have a PAGE_SIZE of 64kB while the
MMU wants to see 4kB entries in the PMD.  I suggest that instead of
allocating each 4kB entry individually, we allocate a 64kB page and
fill in 16 consecutive PMD entries.  This could cost a bit more memory
(although if you've asked for a CONFIG_PAGE_SIZE of 64kB, you presumably
don't care too much about it), but it'll make future page faults cheaper
(as the PMDs will already be present, assuming you have good locality
of reference).
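
An uncompiled sketch of the allocation side (helper and macro names
invented; locking, accounting and error unwinding omitted;
pmd_populate_kernel() stands in for the arch-appropriate populate
helper):

#include <linux/mm.h>
#include <linux/sizes.h>
#include <asm/pgalloc.h>

#define PTE_TABLE_4K		SZ_4K	/* table size the MMU wants to see */
#define PTE_TABLES_PER_PAGE	(PAGE_SIZE / PTE_TABLE_4K)	/* 16 on 64kB */

static int pmd_populate_batch(struct mm_struct *mm, pmd_t *pmd)
{
	struct page *page = alloc_page(GFP_KERNEL | __GFP_ZERO);
	unsigned long tables;
	int i;

	if (!page)
		return -ENOMEM;

	/* one 64kB allocation backs 16 consecutive PMD entries */
	tables = (unsigned long)page_address(page);
	for (i = 0; i < PTE_TABLES_PER_PAGE; i++)
		pmd_populate_kernel(mm, pmd + i,
				    (pte_t *)(tables + i * PTE_TABLE_4K));
	return 0;
}

The single struct page then carries one rcu_head covering all sixteen
tables, so the RCU scheme sketched above applies unmodified.
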
I'd like to hear better ideas than this.