Message-ID: <ZtDXQ6oeQrb8LxkX@google.com>
Date: Thu, 29 Aug 2024 13:18:27 -0700
From: Sean Christopherson <seanjc@...gle.com>
To: Vipin Sharma <vipinsh@...gle.com>
Cc: pbonzini@...hat.com, dmatlack@...gle.com, kvm@...r.kernel.org, 
	linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2 1/4] KVM: x86/mmu: Track TDP MMU NX huge pages separately

On Thu, Aug 29, 2024, Vipin Sharma wrote:
> @@ -871,8 +871,17 @@ void track_possible_nx_huge_page(struct kvm *kvm, struct kvm_mmu_page *sp)
>  		return;
>  
>  	++kvm->stat.nx_lpage_splits;
> -	list_add_tail(&sp->possible_nx_huge_page_link,
> -		      &kvm->arch.possible_nx_huge_pages);
> +	if (is_tdp_mmu_page(sp)) {
> +#ifdef CONFIG_X86_64
> +		++kvm->arch.tdp_mmu_possible_nx_huge_pages_count;
> +		list_add_tail(&sp->possible_nx_huge_page_link,
> +			      &kvm->arch.tdp_mmu_possible_nx_huge_pages);
> +#endif

Pass in the count+list, that way there's no #ifdef and no weird questions about
what happens if the impossible happens (is_tdp_mmu_page() returning true on
32-bit KVM).

void track_possible_nx_huge_page(struct kvm *kvm, struct kvm_mmu_page *sp,
				 u64 *nr_pages, struct list_head *pages)
{
	/*
	 * If it's possible to replace the shadow page with an NX huge page,
	 * i.e. if the shadow page is the only thing currently preventing KVM
	 * from using a huge page, add the shadow page to the list of "to be
	 * zapped for NX recovery" pages.  Note, the shadow page can already be
	 * on the list if KVM is reusing an existing shadow page, i.e. if KVM
	 * links a shadow page at multiple points.
	 */
	if (!list_empty(&sp->possible_nx_huge_page_link))
		return;

	++kvm->stat.nx_lpage_splits;
	++(*nr_pages);
	list_add_tail(&sp->possible_nx_huge_page_link, pages);
}
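
E.g. the TDP MMU call site, which only exists with CONFIG_X86_64, would then
simply do (untested, reusing the field names from this patch):

	track_possible_nx_huge_page(kvm, sp,
				    &kvm->arch.tdp_mmu_possible_nx_huge_pages_count,
				    &kvm->arch.tdp_mmu_possible_nx_huge_pages);

and the shadow MMU call site passes &kvm->arch.possible_nx_huge_pages_count
and &kvm->arch.possible_nx_huge_pages, i.e. both the #ifdef and the
is_tdp_mmu_page() check disappear from the helper entirely.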

> +	} else {
> +		++kvm->arch.possible_nx_huge_pages_count;
> +		list_add_tail(&sp->possible_nx_huge_page_link,
> +			      &kvm->arch.possible_nx_huge_pages);
> +	}
>  }
>  
>  static void account_nx_huge_page(struct kvm *kvm, struct kvm_mmu_page *sp,
> @@ -906,6 +915,13 @@ void untrack_possible_nx_huge_page(struct kvm *kvm, struct kvm_mmu_page *sp)
>  		return;
>  
>  	--kvm->stat.nx_lpage_splits;
> +	if (is_tdp_mmu_page(sp)) {
> +#ifdef CONFIG_X86_64
> +		--kvm->arch.tdp_mmu_possible_nx_huge_pages_count;
> +#endif
> +	} else {
> +		--kvm->arch.possible_nx_huge_pages_count;
> +	}

Same thing here.  The only tweak to my proposed API in patch 4 is that it
needs to take nr_pages as a pointer; then it can simply pass those along to
this helper, e.g. as in the (untested) sketch below.

>  	list_del_init(&sp->possible_nx_huge_page_link);
>  }
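
Something like this (completely untested), mirroring
track_possible_nx_huge_page() above; note that the list head itself isn't
needed here, since list_del_init() operates only on the entry:

void untrack_possible_nx_huge_page(struct kvm *kvm, struct kvm_mmu_page *sp,
				   u64 *nr_pages)
{
	if (list_empty(&sp->possible_nx_huge_page_link))
		return;

	--kvm->stat.nx_lpage_splits;
	--(*nr_pages);
	list_del_init(&sp->possible_nx_huge_page_link);
}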
