Date: Fri, 09 Oct 2020 19:43:13 -0700
From: James Bottomley <James.Bottomley@...senPartnership.com>
To: Eric Biggers <ebiggers@...nel.org>, ira.weiny@...el.com
Cc: Andrew Morton <akpm@...ux-foundation.org>, Thomas Gleixner <tglx@...utronix.de>, Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>, Andy Lutomirski <luto@...nel.org>, Peter Zijlstra <peterz@...radead.org>, linux-aio@...ck.org, linux-efi@...r.kernel.org, kvm@...r.kernel.org, linux-doc@...r.kernel.org, linux-mmc@...r.kernel.org, Dave Hansen <dave.hansen@...ux.intel.com>, dri-devel@...ts.freedesktop.org, linux-mm@...ck.org, target-devel@...r.kernel.org, linux-mtd@...ts.infradead.org, linux-kselftest@...r.kernel.org, samba-technical@...ts.samba.org, ceph-devel@...r.kernel.org, drbd-dev@...ts.linbit.com, devel@...verdev.osuosl.org, linux-cifs@...r.kernel.org, linux-nilfs@...r.kernel.org, linux-scsi@...r.kernel.org, linux-nvdimm@...ts.01.org, linux-rdma@...r.kernel.org, x86@...nel.org, amd-gfx@...ts.freedesktop.org, linux-afs@...ts.infradead.org, cluster-devel@...hat.com, linux-cachefs@...hat.com, intel-wired-lan@...ts.osuosl.org, xen-devel@...ts.xenproject.org, linux-ext4@...r.kernel.org, Fenghua Yu <fenghua.yu@...el.com>, ecryptfs@...r.kernel.org, linux-um@...ts.infradead.org, intel-gfx@...ts.freedesktop.org, linux-erofs@...ts.ozlabs.org, reiserfs-devel@...r.kernel.org, linux-block@...r.kernel.org, linux-bcache@...r.kernel.org, Jaegeuk Kim <jaegeuk@...nel.org>, Dan Williams <dan.j.williams@...el.com>, io-uring@...r.kernel.org, linux-nfs@...r.kernel.org, linux-ntfs-dev@...ts.sourceforge.net, netdev@...r.kernel.org, kexec@...ts.infradead.org, linux-kernel@...r.kernel.org, linux-f2fs-devel@...ts.sourceforge.net, linux-fsdevel@...r.kernel.org, bpf@...r.kernel.org, linuxppc-dev@...ts.ozlabs.org, linux-btrfs@...r.kernel.org
Subject: Re: [PATCH RFC PKS/PMEM 22/58] fs/f2fs: Utilize new kmap_thread()

On Fri, 2020-10-09 at 14:34 -0700, Eric Biggers wrote:
> On Fri, Oct 09, 2020 at 12:49:57PM -0700, ira.weiny@...el.com wrote:
> > From: Ira Weiny <ira.weiny@...el.com>
> > 
> > The kmap() calls in this FS are localized to a single thread. To
> > avoid the over head of global PKRS updates use the new
> > kmap_thread() call.
> > 
> > Cc: Jaegeuk Kim <jaegeuk@...nel.org>
> > Cc: Chao Yu <chao@...nel.org>
> > Signed-off-by: Ira Weiny <ira.weiny@...el.com>
> > ---
> >  fs/f2fs/f2fs.h | 8 ++++----
> >  1 file changed, 4 insertions(+), 4 deletions(-)
> > 
> > diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h
> > index d9e52a7f3702..ff72a45a577e 100644
> > --- a/fs/f2fs/f2fs.h
> > +++ b/fs/f2fs/f2fs.h
> > @@ -2410,12 +2410,12 @@ static inline struct page
> > *f2fs_pagecache_get_page(
> > 
> >  static inline void f2fs_copy_page(struct page *src, struct page
> > *dst)
> >  {
> > -	char *src_kaddr = kmap(src);
> > -	char *dst_kaddr = kmap(dst);
> > +	char *src_kaddr = kmap_thread(src);
> > +	char *dst_kaddr = kmap_thread(dst);
> > 
> >  	memcpy(dst_kaddr, src_kaddr, PAGE_SIZE);
> > -	kunmap(dst);
> > -	kunmap(src);
> > +	kunmap_thread(dst);
> > +	kunmap_thread(src);
> >  }
> 
> Wouldn't it make more sense to switch cases like this to
> kmap_atomic()?
> The pages are only mapped to do a memcpy(), then they're immediately
> unmapped.

On a VIPT/VIVT architecture, this is horrendously wasteful.
You're taking something that was mapped at colour c_src and mapping it to a new address, src_kaddr, which is likely a different colour and so necessitates flushing the original c_src; then you copy it to dst_kaddr, which is also likely a different colour from c_dst, so dst_kaddr has to be flushed on kunmap and c_dst has to be invalidated on kmap.

What we should have is an architectural primitive for doing this, something like kmemcopy_arch(dst, src). PIPT architectures can implement it as the above (possibly dropping the kmap if they don't need it), but VIPT/VIVT architectures can set up a correctly coloured mapping, so they can simply copy from c_src to c_dst without any need to flush, and the data arrives cache hot at c_dst.

James
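For illustration, here is a minimal sketch of the kind of primitive being proposed. The name kmemcopy_arch() comes from the message above; it is not an existing kernel interface, and the override mechanism and fallback shown are only assumptions. The generic fallback maps each page just long enough to do the copy with kmap_atomic(), along the lines Eric suggests; an architecture with virtually indexed caches would supply its own version that sets up correctly coloured mappings so the copy needs no flushing.

#include <linux/highmem.h>
#include <linux/mm.h>
#include <linux/string.h>

/*
 * Hypothetical arch-overridable page-copy primitive (sketch only).
 * An arch with VIPT/VIVT caches would define its own kmemcopy_arch()
 * that maps src and dst at the right cache colours; everyone else
 * gets this generic short-lived-mapping fallback.
 */
#ifndef kmemcopy_arch
static inline void kmemcopy_arch(struct page *dst, struct page *src)
{
	char *src_kaddr = kmap_atomic(src);
	char *dst_kaddr = kmap_atomic(dst);

	memcpy(dst_kaddr, src_kaddr, PAGE_SIZE);

	/* kmap_atomic mappings must be released in reverse order */
	kunmap_atomic(dst_kaddr);
	kunmap_atomic(src_kaddr);
}
#endif

/* f2fs_copy_page() would then reduce to a single call: */
static inline void f2fs_copy_page(struct page *src, struct page *dst)
{
	kmemcopy_arch(dst, src);
}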