Message-ID: <20130502132840.GA27322@redhat.com>
Date:	Thu, 2 May 2013 16:28:40 +0300
From:	"Michael S. Tsirkin" <mst@...hat.com>
To:	Ingo Molnar <mingo@...nel.org>
Cc:	linux-kernel@...r.kernel.org, Thomas Gleixner <tglx@...utronix.de>,
	Ingo Molnar <mingo@...hat.com>,
	"H. Peter Anvin" <hpa@...or.com>, x86@...nel.org,
	Fenghua Yu <fenghua.yu@...el.com>
Subject: Re: [PATCH RFC] x86: uaccess s/might_sleep/might_fault/

On Thu, May 02, 2013 at 10:52:41AM +0200, Ingo Molnar wrote:
> 
> * Michael S. Tsirkin <mst@...hat.com> wrote:
> 
> > The only reason uaccess routines might sleep
> > is if they fault. Make this explicit for
> > __copy_from_user_nocache, and consistent with
> > copy_from_user and friends.
> > 
> > Signed-off-by: Michael S. Tsirkin <mst@...hat.com>
> > ---
> > 
> > I've updated all other arches as well - still
> > build-testing. Any objections to the x86 patch?
> > 
> >  arch/x86/include/asm/uaccess_64.h | 2 +-
> >  1 file changed, 1 insertion(+), 1 deletion(-)
> > 
> > diff --git a/arch/x86/include/asm/uaccess_64.h b/arch/x86/include/asm/uaccess_64.h
> > index 142810c..4f7923d 100644
> > --- a/arch/x86/include/asm/uaccess_64.h
> > +++ b/arch/x86/include/asm/uaccess_64.h
> > @@ -235,7 +235,7 @@ extern long __copy_user_nocache(void *dst, const void __user *src,
> >  static inline int
> >  __copy_from_user_nocache(void *dst, const void __user *src, unsigned size)
> >  {
> > -	might_sleep();
> > +	might_fault();
> >  	return __copy_user_nocache(dst, src, size, 1);
> 
> Looks good to me:
> 
> Acked-by: Ingo Molnar <mingo@...nel.org>
> 
> 
> ... but while reviewing the effects I noticed a bug in might_fault():
> 
> #ifdef CONFIG_PROVE_LOCKING
> void might_fault(void)
> {
>         /*
>          * Some code (nfs/sunrpc) uses socket ops on kernel memory while
>          * holding the mmap_sem, this is safe because kernel memory doesn't
>          * get paged out, therefore we'll never actually fault, and the
>          * below annotations will generate false positives.
>          */
>         if (segment_eq(get_fs(), KERNEL_DS))
>                 return;
> 
>         might_sleep();
> 
> the might_sleep() call should come first. With the current code 
> might_fault() schedules differently depending on CONFIG_PROVE_LOCKING, 
> which is an undesired semantical side effect ...
> 
> So please fix that too while at it.
> 
> Thanks,
> 
> 	Ingo
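
For reference, a minimal sketch of the reordering Ingo describes: might_sleep()
moved ahead of the KERNEL_DS early return, so that scheduling behaviour no longer
depends on CONFIG_PROVE_LOCKING. The rest of might_fault() is assumed unchanged;
this is an illustration, not the patch that was actually applied.

#ifdef CONFIG_PROVE_LOCKING
void might_fault(void)
{
	might_sleep();

	/*
	 * Some code (nfs/sunrpc) uses socket ops on kernel memory while
	 * holding the mmap_sem, this is safe because kernel memory doesn't
	 * get paged out, therefore we'll never actually fault, and the
	 * below annotations will generate false positives.
	 */
	if (segment_eq(get_fs(), KERNEL_DS))
		return;

	/* ... lockdep annotations as before ... */
}
#endif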


OK. And there's another bug that I'd like to fix:
if the caller does pagefault_disable(), pagefaults don't
actually sleep: the page fault handler will detect we are in
atomic context and go directly to the fixups instead of
processing the page fault.

So calling anything that faults in atomic context is
OK, and the check should be

	if (!pagefault_disabled())
		might_sleep();

Except we don't have pagefault_disabled(), and
we still want to catch calls within preempt_disable()
sections (as these can be compiled out), so
I plan to add a per-CPU flag (only if CONFIG_DEBUG_ATOMIC_SLEEP
is set) to distinguish between preempt_disable
and pagefault_disable.
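
A rough sketch of that plan follows; the flag and helper names here
(in_pagefault_disable, pagefault_disabled()) are hypothetical placeholders
for illustration, not taken from an actual patch.

#include <linux/kernel.h>
#include <linux/percpu.h>

#ifdef CONFIG_DEBUG_ATOMIC_SLEEP
/* Bumped/dropped by pagefault_disable()/pagefault_enable() (not shown). */
DEFINE_PER_CPU(int, in_pagefault_disable);

static inline bool pagefault_disabled(void)
{
	return this_cpu_read(in_pagefault_disable) != 0;
}
#else
/* might_sleep() is a no-op without CONFIG_DEBUG_ATOMIC_SLEEP anyway. */
static inline bool pagefault_disabled(void)
{
	return false;
}
#endif

void might_fault(void)
{
	/*
	 * Under pagefault_disable() the fault handler takes the fixup
	 * path instead of sleeping, so only warn when a fault could
	 * actually sleep.
	 */
	if (!pagefault_disabled())
		might_sleep();

	/* ... KERNEL_DS check and lockdep annotations as before ... */
}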

-- 
MST
