Message-ID: <871q3a7mf7.ffs@tglx>
Date: Tue, 30 Jul 2024 20:41:00 +0200
From: Thomas Gleixner <tglx@...utronix.de>
To: "Alexey Gladkov (Intel)" <legion@...nel.org>,
 linux-kernel@...r.kernel.org, linux-coco@...ts.linux.dev
Cc: Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>, Dave
 Hansen <dave.hansen@...ux.intel.com>, "H. Peter Anvin" <hpa@...or.com>,
 "Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>, Andrew Morton
 <akpm@...ux-foundation.org>, Yuan Yao <yuan.yao@...el.com>, Geert
 Uytterhoeven <geert@...ux-m68k.org>, Yuntao Wang <ytcoode@...il.com>, Kai
 Huang <kai.huang@...el.com>, Baoquan He <bhe@...hat.com>, Oleg Nesterov
 <oleg@...hat.com>, Joerg Roedel <jroedel@...e.de>, Tom Lendacky
 <thomas.lendacky@....com>, cho@...rosoft.com, decui@...rosoft.com,
 John.Starks@...rosoft.com
Subject: Re: [PATCH v1 4/4] x86/tdx: Implement movs for MMIO

On Tue, Jul 30 2024 at 19:35, Alexey Gladkov wrote:
> Adapt AMD's implementation of MOVS instruction emulation. Since the
> implementations are similar, the code can be reused.
>
> MOVS emulation consists of dividing it into a series of read and write
> operations, which in turn will be validated separately.

Please split this into two patches:

    1) Splitting out the AMD code
    2) Adding it for Intel
> @@ -369,72 +369,17 @@ static enum es_result vc_decode_insn(struct es_em_ctxt *ctxt)
>  static enum es_result vc_write_mem(struct es_em_ctxt *ctxt,
>  				   char *dst, char *buf, size_t size)
>  {
> -	unsigned long error_code = X86_PF_PROT | X86_PF_WRITE;
> +	unsigned long error_code;
> +	int ret = __put_iomem(dst, buf, size);

Variable ordering....
  
> +static int handle_mmio_movs(struct insn *insn, struct pt_regs *regs, int size, struct ve_info *ve)
> +{
> +	unsigned long ds_base, es_base;
> +	unsigned char *src, *dst;
> +	unsigned char buffer[8];
> +	int off, ret;
> +	bool rep;
> +
> +	/*
> +	 * The in-kernel code must use a special API that does not use MOVS.
> +	 * If the MOVS instruction is received from in-kernel, then something
> +	 * is broken.
> +	 */
> +	WARN_ON_ONCE(!user_mode(regs));

Then it should return here and not try to continue, no?

> +int __get_iomem(char *src, char *buf, size_t size)
> +{
> +	/*
> +	 * This function uses __get_user() independent of whether kernel or user
> +	 * memory is accessed. This works fine because __get_user() does no
> +	 * sanity checks of the pointer being accessed. All that it does is
> +	 * to report when the access failed.
> +	 *
> +	 * Also, this function runs in atomic context, so __get_user() is not
> +	 * allowed to sleep. The page-fault handler detects that it is running
> +	 * in atomic context and will not try to take mmap_sem and handle the
> +	 * fault, so additional pagefault_enable()/disable() calls are not
> +	 * needed.
> +	 *
> +	 * The access can't be done via copy_from_user() here because
> +	 * mmio_read_mem() must not use string instructions to access unsafe
> +	 * memory. The reason is that MOVS is emulated by the #VC handler by
> +	 * splitting the move up into a read and a write and taking a nested #VC
> +	 * exception on whatever of them is the MMIO access. Using string
> +	 * instructions here would cause infinite nesting.
> +	 */
> +	switch (size) {
> +	case 1: {
> +		u8 d1;
> +		u8 __user *s = (u8 __user *)src;

One line for the variables is enough

		u8 d1, __user *s = (u8 __user *)src;

No?

> +	case 8: {
> +		u64 d8;
> +		u64 __user *s = (u64 __user *)src;
> +		if (__get_user(d8, s))

Lacks newline between variable declaration and code.

Thanks,

        tglx
