Message-ID: <088e40fd-3fc7-77dd-a3de-0a2b097d3717@kernel.dk>
Date: Mon, 30 Jan 2023 15:11:39 -0700
From: Jens Axboe <axboe@...nel.dk>
To: John Hubbard <jhubbard@...dia.com>,
David Howells <dhowells@...hat.com>
Cc: Al Viro <viro@...iv.linux.org.uk>,
Christoph Hellwig <hch@...radead.org>,
Matthew Wilcox <willy@...radead.org>, Jan Kara <jack@...e.cz>,
David Hildenbrand <david@...hat.com>,
Jason Gunthorpe <jgg@...dia.com>,
Logan Gunthorpe <logang@...tatee.com>,
Jeff Layton <jlayton@...nel.org>, linux-block@...r.kernel.org,
linux-fsdevel@...r.kernel.org, linux-mm@...ck.org,
linux-kernel@...r.kernel.org
Subject: Re: [GIT PULL] iov_iter: Improve page extraction (pin or just list)
On 1/30/23 3:02 PM, John Hubbard wrote:
> On 1/30/23 13:57, Jens Axboe wrote:
>>> This does cause about a 2.7% regression for me, using O_DIRECT on a raw
>>> block device. Looking at a perf diff, here's the top:
>>>
>>> +2.71% [kernel.vmlinux] [k] mod_node_page_state
>>> +2.22% [kernel.vmlinux] [k] iov_iter_extract_pages
>>>
>>> and these two are gone:
>>>
>>> 2.14% [kernel.vmlinux] [k] __iov_iter_get_pages_alloc
>>> 1.53% [kernel.vmlinux] [k] iov_iter_get_pages
>>>
>>> rest is mostly in the noise, but mod_node_page_state() sticks out like
>>> a sore thumb. They seem to be caused by the node stat accounting done
>>> in gup.c for FOLL_PIN.
>>
>> Confirmed just disabling the node_stat bits in mm/gup.c and now the
>> performance is back to the same levels as before.
>>
>> An almost 3% regression is a bit hard to swallow...
>
> This is something that we saw when adding pin_user_pages_fast(),
> yes. I doubt that I can quickly find the email thread, but we
> measured it and weren't immediately able to come up with a way
> to make it faster.
>
> At this point, it's a good time to consider if there is any
> way to speed it up. But I wanted to confirm that you're absolutely
> right: the measurement sounds about right, and that's also the
> hotspot that we saw, too.
From spending all of 5 minutes on this, it must be due to exceeding the
pcp stat_threshold, as we then end up doing two atomic_long_add()s.
Looking at proc, it looks like it's 108. And with this test, we're
hitting that slow path ~80k/second. Uhm...
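For anyone following along, here's a rough userspace sketch of that fold
path (names and the threshold handling are simplified; this is not the
actual mm/vmstat.c code, just an illustration of why every update that
crosses stat_threshold ends up doing an atomic add per counter):

#include <stdatomic.h>
#include <stdio.h>

/* Illustrative threshold, taken from the value reported above. */
#define STAT_THRESHOLD	108

static atomic_long global_stat;		/* models the global node stat */
static atomic_long node_stat;		/* models the per-node counter */

struct pcp_counter {
	long diff;			/* pending per-CPU delta */
};

static void mod_stat(struct pcp_counter *pcp, long delta)
{
	long x = pcp->diff + delta;

	if (x > STAT_THRESHOLD || x < -STAT_THRESHOLD) {
		/* Slow path: fold into both counters, one atomic add each. */
		atomic_fetch_add(&node_stat, x);
		atomic_fetch_add(&global_stat, x);
		x = 0;
	}
	pcp->diff = x;
}

int main(void)
{
	struct pcp_counter pcp = { 0 };

	/*
	 * Pinning a batch of pages per I/O bumps the counter by more than
	 * the threshold nearly every time, so almost every call takes the
	 * atomic slow path.
	 */
	for (int i = 0; i < 1000; i++)
		mod_stat(&pcp, 128);

	printf("global %ld node %ld pending %ld\n",
	       atomic_load(&global_stat), atomic_load(&node_stat), pcp.diff);
	return 0;
}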
--
Jens Axboe