Message-ID: <20201209201415.GT7338@casper.infradead.org>
Date: Wed, 9 Dec 2020 20:14:15 +0000
From: Matthew Wilcox <willy@...radead.org>
To: Dan Williams <dan.j.williams@...el.com>
Cc: Ira Weiny <ira.weiny@...el.com>,
    "Darrick J. Wong" <darrick.wong@...cle.com>,
    Thomas Gleixner <tglx@...utronix.de>,
    Andrew Morton <akpm@...ux-foundation.org>,
    Dave Hansen <dave.hansen@...el.com>,
    Christoph Hellwig <hch@...radead.org>,
    Al Viro <viro@...iv.linux.org.uk>,
    Eric Biggers <ebiggers@...nel.org>,
    Joonas Lahtinen <joonas.lahtinen@...ux.intel.com>,
    Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
    linux-fsdevel <linux-fsdevel@...r.kernel.org>
Subject: Re: [PATCH V2 2/2] mm/highmem: Lift memcpy_[to|from]_page to core

On Wed, Dec 09, 2020 at 11:47:56AM -0800, Dan Williams wrote:
> On Tue, Dec 8, 2020 at 8:03 PM Matthew Wilcox <willy@...radead.org> wrote:
> > On Tue, Dec 08, 2020 at 06:22:50PM -0800, Ira Weiny wrote:
> > > Therefore, I tend to agree with Dan that if anything is to be done it should be
> > > a WARN_ON(), which is only going to throw an error that something has probably
> > > been wrong all along and should be fixed, but continue running as before.
> >
> > Silent data corruption is for ever. Are you absolutely sure nobody has
> > done:
> >
> > 	page = alloc_pages(GFP_HIGHUSER_MOVABLE, 3);
> > 	memcpy_to_page(page, PAGE_SIZE * 2, p, PAGE_SIZE * 2);
> >
> > because that will work fine if the pages come from ZONE_NORMAL and fail
> > miserably if they came from ZONE_HIGHMEM.
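
To spell out why: a minimal sketch of the helper in question, paraphrased
from the patch rather than quoted from it:

	static inline void memcpy_to_page(struct page *page, size_t offset,
					  const char *from, size_t len)
	{
		char *to = kmap_atomic(page);	/* maps exactly one page */

		memcpy(to + offset, from, len);
		kunmap_atomic(to);
	}

For a ZONE_NORMAL page, kmap_atomic() just returns the page's linear-map
address, so a copy running past PAGE_SIZE spills into the physically
contiguous next page of the order-3 allocation and appears to work.  For a
ZONE_HIGHMEM page the mapping is a single-page window, and the same copy
writes past the end of it.
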
>
> ...and violently regress with the BUG_ON.

... which is what we want, no?

> The question to me is: which is more likely, that any bad usages have
> been covered up by being limited to ZONE_NORMAL / 64-bit only, or that
> silent data corruption has been occurring with no ill effects?

I wouldn't be at all surprised to learn that there is silent data
corruption on 32-bit systems with HIGHMEM.  Would you?  How much testing
do you do on 32-bit HIGHMEM systems?

Actually, I wouldn't be at all surprised if we can hit this problem today.
Look at this:

size_t _copy_from_iter(void *addr, size_t bytes, struct iov_iter *i)
{
	char *to = addr;

	if (unlikely(iov_iter_is_pipe(i))) {
		WARN_ON(1);
		return 0;
	}
	if (iter_is_iovec(i))
		might_fault();
	iterate_and_advance(i, bytes, v,
		copyin((to += v.iov_len) - v.iov_len, v.iov_base, v.iov_len),
		memcpy_from_page((to += v.bv_len) - v.bv_len, v.bv_page,
				 v.bv_offset, v.bv_len),
		memcpy((to += v.iov_len) - v.iov_len, v.iov_base, v.iov_len)
	)

	return bytes;
}
EXPORT_SYMBOL(_copy_from_iter);

There's a lot of macrology in there, so for those following along who
aren't familiar with the iov_iter code, if the iter is operating on a
bvec, then iterate_and_advance() will call memcpy_from_page(), passing
it the bv_page, bv_offset and bv_len stored in the bvec. Since 2019,
Linux has supported multipage bvecs (commit 07173c3ec276). So bv_len
absolutely *can* be > PAGE_SIZE.
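
For concreteness, here is a hypothetical bvec of the kind that commit
07173c3ec276 made legal; the names are illustrative, not lifted from any
real caller:

	/* A single bvec covering two physically contiguous pages. */
	struct bio_vec bv = {
		.bv_page	= page,			/* head page of the range */
		.bv_offset	= 0,
		.bv_len		= 2 * PAGE_SIZE,	/* crosses a page boundary */
	};

iterate_and_advance() hands exactly those three fields to
memcpy_from_page(), so a helper that kmaps a single page cannot satisfy
the copy when the page is in ZONE_HIGHMEM.
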
Does this ever happen in practice? I have no idea; I don't know whether
any multipage BIOs are currently handed to copy_from_iter(). But I
have no confidence in your audit if you didn't catch this one.

> > > FWIW I think this is a 'bad BUG_ON' use because we are "checking something that
> > > we know we might be getting wrong".[1] And because, "BUG() is only good for
> > > something that never happens and that we really have no other option for".[2]
> >
> > BUG() is our only option here. Both limiting how much we copy and
> > copying the requested amount result in data corruption or leaking
> > information to a process that isn't supposed to see it.
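
Concretely, take the sketch above and put the check ahead of the copy
(where exactly the check lives is the patch's business; this is just the
shape of it):

	char *to = kmap_atomic(page);

	BUG_ON(offset + len > PAGE_SIZE);	/* stop before we scribble */
	memcpy(to + offset, from, len);
	kunmap_atomic(to);

Clamping instead, say len = min_t(size_t, len, PAGE_SIZE - offset),
silently truncates the copy: corruption on the write side, and on the
read side a destination buffer whose tail still holds stale bytes for
some process to see.
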
>
> At a minimum I think this should be debated in a follow-on patch to
> add assertion checking where there was none before. There is no
> evidence of a page being overrun in the audit Ira performed.

If we put it into a separate patch, someone will suggest backing out the
patch which tells us that there's a problem. You know, like this guy ...
https://lore.kernel.org/linux-mm/CAPcyv4jNVroYmirzKw_=CsEixOEACdL3M1Wc4xjd_TFv3h+o8Q@mail.gmail.com/