Date:   Mon, 5 Mar 2018 11:07:16 -0800
From:   Dave Hansen <dave.hansen@...el.com>
To:     "Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>,
        Ingo Molnar <mingo@...hat.com>, x86@...nel.org,
        Thomas Gleixner <tglx@...utronix.de>,
        "H. Peter Anvin" <hpa@...or.com>,
        Tom Lendacky <thomas.lendacky@....com>
Cc:     Kai Huang <kai.huang@...ux.intel.com>,
        linux-kernel@...r.kernel.org, linux-mm@...ck.org
Subject: Re: [RFC, PATCH 19/22] x86/mm: Implement free_encrypt_page()

On 03/05/2018 08:26 AM, Kirill A. Shutemov wrote:
> +void free_encrypt_page(struct page *page, int keyid, unsigned int order)
> +{
> +	int i;
> +	void *v;
> +
> +	for (i = 0; i < (1 << order); i++) {
> +		v = kmap_atomic_keyid(page + i, keyid);
> +		/* See comment in prep_encrypt_page() */
> +		clflush_cache_range(v, PAGE_SIZE);
> +		kunmap_atomic(v);
> +	}
> +}

Have you measured how slow this is?

It's an optimization, but can we find a way to do this dance only when
we *actually* change the keyid?  Right now we're mapping at both alloc and
free, clflushing at free, and zeroing at alloc.  Let's say somebody does:

	ptr = malloc(PAGE_SIZE);
	*ptr = foo;
	free(ptr);

	ptr = malloc(PAGE_SIZE);
	*ptr = bar;
	free(ptr);

And let's say ptr is in encrypted memory and that we actually munmap()
at free().  We can theoretically skip the clflush, right?
