Date:	Tue, 15 May 2007 20:24:42 -0700
From:	Andrew Morton <akpm@...ux-foundation.org>
To:	Christoph Lameter <clameter@....com>
Cc:	linux-kernel@...r.kernel.org,
	Satyam Sharma <satyam.sharma@...il.com>,
	Nate Diller <nate.diller@...il.com>
Subject: Re: Pagecache zeroing: zero_user_segment, zero_user_segments and
 zero_user

On Tue, 15 May 2007 20:00:18 -0700 (PDT) Christoph Lameter <clameter@....com> wrote:

> Simplify page cache zeroing of segments of pages through 3 functions
> 
> 
> zero_user_segments(page, start1, end1, start2, end2)
> 
> 	Zeros two segments of the page. It takes the positions at which to
> 	start and end the zeroing, which avoids length calculations.
> 
> zero_user_segment(page, start, end)
> 
> 	Same for a single segment.
> 
> zero_user(page, start, length)
> 
> 	Length variant for the case where we know the length.
> 
> 
> 
> We remove the zero_user_page macro. Issues:
> 
> 1. It's a macro. Inline functions are preferable.
> 
> 2. It has a useless parameter specifying the kmap slot.
>    The functions above default to KM_USER0, which was also always used when
>    zero_user_page was called, except in a single case. We open-code that
>    single case to draw attention to the spot.
> 
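
[For reference only, not part of the quoted patch: the single-segment and
length variants described above could plausibly be thin wrappers around
zero_user_segments(), along these lines.]

/* Sketch: the two simpler helpers expressed via zero_user_segments(). */
static inline void zero_user_segment(struct page *page,
	unsigned start, unsigned end)
{
	zero_user_segments(page, start, end, 0, 0);
}

static inline void zero_user(struct page *page,
	unsigned start, unsigned length)
{
	zero_user_segments(page, start, start + length, 0, 0);
}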

Dunno.  fwiw, we decided to _not_ embed KM_USER0 in the callee: we have had
some pretty ghastly bugs in the past due to misuse of kmap slots, so the
idea was to shove the decision into the caller's face and make them think
about what they're doing.
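
[To illustrate that alternative: a hypothetical caller-chooses-slot variant,
mirroring the quoted hunk below; the _km name and signature are made up for
this sketch and are not from the patch.]

/*
 * Sketch of the caller-supplies-the-slot approach: the caller states which
 * atomic kmap slot is safe in its context instead of the helper silently
 * assuming KM_USER0.
 */
static inline void zero_user_segments_km(struct page *page,
	unsigned start1, unsigned end1,
	unsigned start2, unsigned end2,
	enum km_type slot)
{
	void *kaddr = kmap_atomic(page, slot);

	BUG_ON(end1 > PAGE_SIZE || end2 > PAGE_SIZE);

	if (end1 > start1)
		memset(kaddr + start1, 0, end1 - start1);
	if (end2 > start2)
		memset(kaddr + start2, 0, end2 - start2);

	flush_dcache_page(page);
	kunmap_atomic(kaddr, slot);
}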

> +static inline void zero_user_segments(struct page *page,
> +	unsigned start1, unsigned end1,
> +	unsigned start2, unsigned end2)
> +{
> +	void *kaddr = kmap_atomic(page, KM_USER0);
> +
> +	BUG_ON(end1 > PAGE_SIZE ||
> +		end2 > PAGE_SIZE);
> +
> +	if (end1 > start1)
> +		memset(kaddr + start1, 0, end1 - start1);
> +
> +	if (end2 > start2)
> +		memset(kaddr + start2, 0, end2 - start2);
> +
> +	flush_dcache_page(page);
> +	kunmap_atomic(kaddr, KM_USER0);
> +}

For some reason we've always done the flush_dcache_page() while holding the
kmap.  There's no reason to do it this way, and it just serves to worsen
scheduling latency a tiny bit.
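
[The reordering that implies, sketched against the quoted hunk and assuming
the rest of the function stays as posted: end the atomic kmap first, then
flush, since flush_dcache_page() only needs the struct page, not the mapping.]

static inline void zero_user_segments(struct page *page,
	unsigned start1, unsigned end1,
	unsigned start2, unsigned end2)
{
	void *kaddr = kmap_atomic(page, KM_USER0);

	BUG_ON(end1 > PAGE_SIZE || end2 > PAGE_SIZE);

	if (end1 > start1)
		memset(kaddr + start1, 0, end1 - start1);
	if (end2 > start2)
		memset(kaddr + start2, 0, end2 - start2);

	kunmap_atomic(kaddr, KM_USER0);
	/* flush after dropping the atomic kmap to keep the atomic section short */
	flush_dcache_page(page);
}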


