Message-ID: <20130129090126.GA5547@gmail.com>
Date:	Tue, 29 Jan 2013 10:01:26 +0100
From:	Ingo Molnar <mingo@...nel.org>
To:	nan chen <nachenn@...il.com>
Cc:	Andrew Morton <akpm@...ux-foundation.org>,
	Yuanhan Liu <yuanhan.liu@...ux.intel.com>,
	linux-kernel@...r.kernel.org, Thomas Gleixner <tglx@...utronix.de>,
	Steven Rostedt <rostedt@...dmis.org>
Subject: Re: [PATCH 2/2] mutex: use spin_[un]lock instead of
 arch_spin_[un]lock


* nan chen <nachenn@...il.com> wrote:

> 2013/1/25 Ingo Molnar <mingo@...nel.org>
> 
> >
> > * nan chen <nachenn@...il.com> wrote:
> >
> > > 2013/1/25 Ingo Molnar <mingo@...nel.org>
> > >
> > > >
> > > > * Andrew Morton <akpm@...ux-foundation.org> wrote:
> > > >
> > > > > On Thu, 24 Jan 2013 17:22:45 +0800
> > > > > Yuanhan Liu <yuanhan.liu@...ux.intel.com> wrote:
> > > > >
> > > > > > Use spin_[un]lock instead of arch_spin_[un]lock in mutex-debug.h so
> > > > > > that we can collect the lock statistics of spin_lock_mutex from
> > > > > > /proc/lock_stat.
> > > >
> > > > So, as per the discussion we don't want this patch, because we
> > > > are using raw locks there to keep mutex lockdep overhead low.
> > > > The value of lockdep-checking such a basic locking primitive is
> > > > minimal - it's rarely tweaked and if it breaks we won't have a
> > > > bootable kernel to begin with.
> > > >
> > > > So instead I suggested a different patch: adding a comment to
> > > > explain why we don't lockdep-cover the mutex code spinlocks.
> > > >
> > > > > Also, I believe your patch permits this cleanup:
> > > > >
> > > > > --- a/kernel/mutex-debug.h~mutex-use-spin_lock-instead-of-arch_spin_lock-fix
> > > > > +++ a/kernel/mutex-debug.h
> > > > > @@ -42,14 +42,12 @@ static inline void mutex_clear_owner(str
> > > > >               struct mutex *l = container_of(lock, struct mutex, wait_lock); \
> > > > >                                                       \
> > > > >               DEBUG_LOCKS_WARN_ON(in_interrupt());    \
> > > > > -             local_irq_save(flags);                  \
> > > > > -             spin_lock(lock);                        \
> > > > > +             spin_lock_irqsave(lock, flags);         \
> > > >
> > > > Yes, I mentioned that yesterday, but we really don't want the
> > > > change to begin with.
> > > >
> > > > Thanks,
> > > >
> > > >         Ingo
> > > >
> > >
> > > Hi,
> > >
> > > Looks like mutex.h does not disable local interrupts.
> > > But why does the code disable local interrupts in mutex-debug.h?
> >
> > To protect against preemption I suspect. preempt_disable() could
> > be used in the mutex-debug.h variant I suppose.
> >
> > Thanks,
> >
> >         Ingo
> >
> 
> 
> spin_lock() itself already protects against preemption.

Yes, but mutex-debug.h does not use spin_lock(); it uses
arch_spin_lock():

#define spin_lock_mutex(lock, flags)                    \
        do {                                            \
                struct mutex *l = container_of(lock, struct mutex, wait_lock); \
                                                        \
                DEBUG_LOCKS_WARN_ON(in_interrupt());    \
                local_irq_save(flags);                  \
                arch_spin_lock(&(lock)->rlock.raw_lock);\
                DEBUG_LOCKS_WARN_ON(l->magic != l);     \
        } while (0)
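
For contrast, the non-debug variant in kernel/mutex.h is (roughly, from
memory) just a plain spin_lock()/spin_unlock() that ignores the flags
argument and does not touch irqs at all:

#define spin_lock_mutex(lock, flags) \
                do { spin_lock(lock); (void)(flags); } while (0)
#define spin_unlock_mutex(lock, flags) \
                do { spin_unlock(lock); (void)(flags); } while (0)

The (void)(flags) cast is only there to silence the unused-variable
warning at the call sites.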

The original question was why mutex-debug.h (unmodified) uses 
irq disabling.
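
If we wanted to avoid the irq disabling, the preempt_disable() based
variant mentioned earlier in the thread would look something like the
sketch below (untested; the flags argument becomes unused and is only
kept for the common macro signature):

#define spin_lock_mutex(lock, flags)                    \
        do {                                            \
                struct mutex *l = container_of(lock, struct mutex, wait_lock); \
                                                        \
                DEBUG_LOCKS_WARN_ON(in_interrupt());    \
                /* disable preemption instead of irqs: */ \
                preempt_disable();                      \
                arch_spin_lock(&(lock)->rlock.raw_lock);\
                DEBUG_LOCKS_WARN_ON(l->magic != l);     \
                (void)(flags);  /* flags unused here */ \
        } while (0)

(spin_unlock_mutex() would do the arch_spin_unlock() plus
preempt_enable() accordingly.) That would match what spin_lock()
itself does for preemption; whether the irq disabling buys anything
beyond that is exactly the open question above.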

Thanks,

	Ingo
