Message-ID: <20161127022506.GW1555@ZenIV.linux.org.uk>
Date: Sun, 27 Nov 2016 02:25:09 +0000
From: Al Viro <viro@...IV.linux.org.uk>
To: Linus Torvalds <torvalds@...ux-foundation.org>
Cc: Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
linux-fsdevel <linux-fsdevel@...r.kernel.org>
Subject: Re: [git pull] vfs fix
On Sat, Nov 26, 2016 at 05:48:54PM -0800, Linus Torvalds wrote:
> That's what all the other users do, and that's what should be the
> "right usage pattern", afaik. The number of pages really *is*
> calculated as
>
> int n = DIV_ROUND_UP(result + offs, PAGE_SIZE);
>
> in other iov_iter_get_pages_alloc() callers, although the nfs code
> open-codes it as
>
> npages = (result + pgbase + PAGE_SIZE - 1) / PAGE_SIZE;
>
> so it's not a very strong pattern.
Two issues here. One is that iov_iter_get_pages{,_alloc}() calling
conventions are fucking ugly. I'm guilty of that atrocity; my only
excuse is that this thing has congealed from many open-coded instances,
quite a few of those appearing only after considerable massage of the
code. I _hate_ the boilerplate we have in the functions implementing
those for various iov_iter flavours, and the boilerplate in the callers.
I am going to try and come up with something less atrocious. As it
is, renaming that variable and adding it to the return value of
iov_iter_get_pages_alloc() is certainly not a problem and would be
prettier, but TBH I just went "yet another place to go into that cleanup".
Shouldn't have.
Another thing is that it was a leftover from "Alexei, could you see if
that thing fixes your reproducer?" - just in case things _really_ had
gone insane and it was not a wrong rounding but a completely buggered
pipe_get_pages_alloc(). It hadn't.
Anyway, leaving that BUG_ON() in was wrong; I can send a followup
massaging that thing as you've suggested, if you are interested in
that. But keep in mind that the whole iov_iter_get_pages...() calling
conventions are going to be changed, hopefully soon.