Message-ID: <ZypCuSRW9VJEEWnr@google.com>
Date: Tue, 5 Nov 2024 16:07:21 +0000
From: Quentin Perret <qperret@...gle.com>
To: kernel test robot <lkp@...el.com>
Cc: Marc Zyngier <maz@...nel.org>, Oliver Upton <oliver.upton@...ux.dev>,
Joey Gouly <joey.gouly@....com>,
Suzuki K Poulose <suzuki.poulose@....com>,
Zenghui Yu <yuzenghui@...wei.com>,
Catalin Marinas <catalin.marinas@....com>,
Will Deacon <will@...nel.org>, oe-kbuild-all@...ts.linux.dev,
Fuad Tabba <tabba@...gle.com>,
Vincent Donnefort <vdonnefort@...gle.com>,
Sebastian Ene <sebastianene@...gle.com>,
linux-arm-kernel@...ts.infradead.org, kvmarm@...ts.linux.dev,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH 18/18] KVM: arm64: Plumb the pKVM MMU in KVM
On Tuesday 05 Nov 2024 at 13:53:22 (+0800), kernel test robot wrote:
> Hi Quentin,
>
> kernel test robot noticed the following build warnings:
>
> [auto build test WARNING on v6.12-rc6]
> [also build test WARNING on linus/master]
> [cannot apply to kvmarm/next next-20241104]
> [If your patch is applied to the wrong git tree, kindly drop us a note.
> And when submitting patch, we suggest to use '--base' as documented in
> https://git-scm.com/docs/git-format-patch#_base_tree_information]
>
> url: https://github.com/intel-lab-lkp/linux/commits/Quentin-Perret/KVM-arm64-Change-the-layout-of-enum-pkvm_page_state/20241104-213817
> base: v6.12-rc6
> patch link: https://lore.kernel.org/r/20241104133204.85208-19-qperret%40google.com
> patch subject: [PATCH 18/18] KVM: arm64: Plumb the pKVM MMU in KVM
> config: arm64-randconfig-002-20241105 (https://download.01.org/0day-ci/archive/20241105/202411051325.EBkzE0th-lkp@intel.com/config)
> compiler: aarch64-linux-gcc (GCC) 14.1.0
> reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20241105/202411051325.EBkzE0th-lkp@intel.com/reproduce)
>
> If you fix the issue in a separate patch/commit (i.e. not just a new version of
> the same patch/commit), kindly add following tags
> | Reported-by: kernel test robot <lkp@...el.com>
> | Closes: https://lore.kernel.org/oe-kbuild-all/202411051325.EBkzE0th-lkp@intel.com/
>
> All warnings (new ones prefixed by >>):
>
> >> arch/arm64/kvm/mmu.c:338: warning: Function parameter or struct member 'pgt' not described in 'kvm_s2_unmap'
> >> arch/arm64/kvm/mmu.c:338: warning: Function parameter or struct member 'addr' not described in 'kvm_s2_unmap'
> >> arch/arm64/kvm/mmu.c:338: warning: expecting prototype for __unmap_stage2_range(). Prototype was for kvm_s2_unmap() instead
>
>
> vim +338 arch/arm64/kvm/mmu.c
>
> 299
> 300 /*
> 301 * Unmapping vs dcache management:
> 302 *
> 303 * If a guest maps certain memory pages as uncached, all writes will
> 304 * bypass the data cache and go directly to RAM. However, the CPUs
> 305 * can still speculate reads (not writes) and fill cache lines with
> 306 * data.
> 307 *
> 308 * Those cache lines will be *clean* cache lines though, so a
> 309 * clean+invalidate operation is equivalent to an invalidate
> 310 * operation, because no cache lines are marked dirty.
> 311 *
> 312 * Those clean cache lines could be filled prior to an uncached write
> 313 * by the guest, and the cache coherent IO subsystem would therefore
> 314 * end up writing old data to disk.
> 315 *
> 316 * This is why right after unmapping a page/section and invalidating
> 317 * the corresponding TLBs, we flush to make sure the IO subsystem will
> 318 * never hit in the cache.
> 319 *
> 320 * This is all avoided on systems that have ARM64_HAS_STAGE2_FWB, as
> 321 * we then fully enforce cacheability of RAM, no matter what the guest
> 322 * does.
> 323 */
> 324 /**
> 325 * __unmap_stage2_range -- Clear stage2 page table entries to unmap a range
> 326 * @mmu: The KVM stage-2 MMU pointer
> 327 * @start: The intermediate physical base address of the range to unmap
> 328 * @size: The size of the area to unmap
> 329 * @may_block: Whether or not we are permitted to block
> 330 *
> 331 * Clear a range of stage-2 mappings, lowering the various ref-counts. Must
> 332 * be called while holding mmu_lock (unless for freeing the stage2 pgd before
> 333 * destroying the VM), otherwise another faulting VCPU may come in and mess
> 334 * with things behind our backs.
> 335 */
> 336
> 337 static int kvm_s2_unmap(struct kvm_pgtable *pgt, u64 addr, u64 size)
> > 338 {
> 339 return KVM_PGT_S2(unmap, pgt, addr, size);
> 340 }
> 341
Oops, yes, that broke the kerneldoc comment; I'll fix it in v2.