Message-ID: <20200214160342.GA7778@bombadil.infradead.org>
Date: Fri, 14 Feb 2020 08:03:42 -0800
From: Matthew Wilcox <willy@...radead.org>
To: "Kirill A. Shutemov" <kirill@...temov.name>
Cc: linux-fsdevel@...r.kernel.org, linux-mm@...ck.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2 13/25] fs: Add zero_user_large
On Fri, Feb 14, 2020 at 04:52:48PM +0300, Kirill A. Shutemov wrote:
> On Tue, Feb 11, 2020 at 08:18:33PM -0800, Matthew Wilcox wrote:
> > From: "Matthew Wilcox (Oracle)" <willy@...radead.org>
> >
> > We can't kmap() a THP, so add a wrapper around zero_user() for large
> > pages.
>
> I would rather address it closer to the root: make zero_user_segments()
> handle compound pages.
Hah. I ended up doing that, but hadn't sent it out. I don't like
how ugly it is:
@@ -219,18 +219,57 @@ static inline void zero_user_segments(struct page *page,
unsigned start1, unsigned end1,
unsigned start2, unsigned end2)
{
- void *kaddr = kmap_atomic(page);
-
- BUG_ON(end1 > PAGE_SIZE || end2 > PAGE_SIZE);
-
- if (end1 > start1)
- memset(kaddr + start1, 0, end1 - start1);
-
- if (end2 > start2)
- memset(kaddr + start2, 0, end2 - start2);
-
- kunmap_atomic(kaddr);
- flush_dcache_page(page);
+ unsigned int i;
+
+ BUG_ON(end1 > thp_size(page) || end2 > thp_size(page));
+
+ for (i = 0; i < hpage_nr_pages(page); i++) {
+ void *kaddr;
+ unsigned this_end;
+
+ if (end1 == 0 && start2 >= PAGE_SIZE) {
+ start2 -= PAGE_SIZE;
+ end2 -= PAGE_SIZE;
+ continue;
+ }
+
+ if (start1 >= PAGE_SIZE) {
+ start1 -= PAGE_SIZE;
+ end1 -= PAGE_SIZE;
+ if (start2) {
+ start2 -= PAGE_SIZE;
+ end2 -= PAGE_SIZE;
+ }
+ continue;
+ }
+
+ kaddr = kmap_atomic(page + i);
+
+ this_end = min_t(unsigned, end1, PAGE_SIZE);
+ if (end1 > start1)
+ memset(kaddr + start1, 0, this_end - start1);
+ end1 -= this_end;
+ start1 = 0;
+
+ if (start2 >= PAGE_SIZE) {
+ start2 -= PAGE_SIZE;
+ end2 -= PAGE_SIZE;
+ } else {
+ this_end = min_t(unsigned, end2, PAGE_SIZE);
+ if (end2 > start2)
+ memset(kaddr + start2, 0, this_end - start2);
+ end2 -= this_end;
+ start2 = 0;
+ }
+
+ kunmap_atomic(kaddr);
+ flush_dcache_page(page + i);
+
+ if (!end1 && !end2)
+ break;
+ }
+
+ BUG_ON((start1 | start2 | end1 | end2) != 0);
}
I think at this point it has to move out-of-line too.
> > +static inline void zero_user_large(struct page *page,
> > + unsigned start, unsigned size)
> > +{
> > + unsigned int i;
> > +
> > + for (i = 0; i < thp_order(page); i++) {
> > + if (start > PAGE_SIZE) {
>
> Off-by-one? >= ?
Good catch; I'd also noticed that when I came to redo zero_user_segments().