Message-ID: <20171213165155.GA6003@kroah.com>
Date:   Wed, 13 Dec 2017 17:51:55 +0100
From:   Greg Kroah-Hartman <gregkh@...uxfoundation.org>
To:     Boris Brezillon <boris.brezillon@...e-electrons.com>
Cc:     Wolfram Sang <wsa@...-dreams.de>, linux-i2c@...r.kernel.org,
        Jonathan Corbet <corbet@....net>, linux-doc@...r.kernel.org,
        Arnd Bergmann <arnd@...db.de>,
        Przemyslaw Sroka <psroka@...ence.com>,
        Arkadiusz Golec <agolec@...ence.com>,
        Alan Douglas <adouglas@...ence.com>,
        Bartosz Folta <bfolta@...ence.com>,
        Damian Kos <dkos@...ence.com>,
        Alicja Jurasik-Urbaniak <alicja@...ence.com>,
        Jan Kotas <jank@...ence.com>,
        Cyprian Wronka <cwronka@...ence.com>,
        Alexandre Belloni <alexandre.belloni@...e-electrons.com>,
        Thomas Petazzoni <thomas.petazzoni@...e-electrons.com>,
        Nishanth Menon <nm@...com>, Rob Herring <robh+dt@...nel.org>,
        Pawel Moll <pawel.moll@....com>,
        Mark Rutland <mark.rutland@....com>,
        Ian Campbell <ijc+devicetree@...lion.org.uk>,
        Kumar Gala <galak@...eaurora.org>, devicetree@...r.kernel.org,
        linux-kernel@...r.kernel.org
Subject: Re: [RFC 2/5] i3c: Add core I3C infrastructure

On Wed, Dec 13, 2017 at 05:20:43PM +0100, Boris Brezillon wrote:
> Hi Greg,
> 
> On Tue, 1 Aug 2017 19:13:27 -0700
> Greg Kroah-Hartman <gregkh@...uxfoundation.org> wrote:
> 
> > > > > Unless you see a good reason to not use a R/W lock, I'd like to keep it
> > > > > this way because master IPs are likely to implement advanced queuing
> > > > > mechanisms (allowing one to queue new transfers even if the master is
> > > > > already busy processing other requests), and serializing things at the
> > > > > framework level will just prevent us from using this kind of
> > > > > optimization.    
> > > > 
> > > > Unless you can prove otherwise, using a rw lock is almost always worse
> > > > than just a mutex.  
> > > 
> > > Is it still true when it's taken in non-exclusive mode most of the
> > > time, and the time you spend in the critical section is non-negligible?
> > > 
> > > I won't pretend I know better than you do what is preferable, it's just
> > > that the RW lock seemed appropriate to me for the situation I tried to
> > > describe here.  
> > 
> > Again, measure it.  If you can't measure it, then don't use it.  Use a
> > simple lock instead.  Seriously, don't make it more complex until you
> > really have to.  It sounds like you didn't measure it at all, which
> > isn't good, please do so.
> > 
> 
> I'm resurrecting this thread because I finally had the time to implement
> message queuing in the Cadence I3C master driver. So I did a test with 2
> I3C devices on the bus, their drivers sending as many SDR messages as
> they can in 10 s. Here are the results:
> 
>           |    mutex    |    rwsem    |
> ---------------------------------------
> dev1      |    19087    |    29532    |
> dev2      |    19341    |    29118    |
> =======================================
> total     |    38428    |    58650    |
> msg/sec   |    ~3843    |    ~5865    |
> 
> 
> The results I'm obtaining here are not so surprising, since all normal
> transfers take the lock in read mode, so there's no contention.
> I didn't measure the impact on performance when there's one
> maintenance operation taking the lock in write mode and several normal
> transfers waiting for this lock, but really, maintenance operations are
> infrequent, and that's not where performance matters in our use case.
> 
> I also did the same test with only one device doing transfers on the
> bus, and this time the mutex wins, but there's not a huge difference.
> 
>           |    mutex    |    rwsem    |
> ---------------------------------------
> total     |    67116    |    66561    |
> msg/sec   |    ~6712    |    ~6656    |
> 
> Let me know if you want more information on the test procedure.

Nice, thanks for testing, so it is a real win here, good!

greg k-h
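For readers unfamiliar with the pattern under discussion, here is a minimal
sketch of the locking scheme Boris describes: normal transfers take an
rw_semaphore in non-exclusive (read) mode, so several of them can be queued
on the master concurrently, while infrequent maintenance operations take it
in exclusive (write) mode. The struct and function names below are
illustrative assumptions, not code from the patch series.

#include <linux/rwsem.h>

/* Illustrative only: the real I3C core structures differ. */
struct i3c_bus {
	struct rw_semaphore lock;	/* init_rwsem() at bus creation */
	/* ... */
};

/*
 * Normal SDR transfers take the lock in read mode, so several of them
 * can be in flight and queued on the master at the same time.
 */
static int i3c_do_xfer(struct i3c_bus *bus)
{
	int ret = 0;

	down_read(&bus->lock);
	/* queue the transfer on the master and wait for completion */
	up_read(&bus->lock);

	return ret;
}

/*
 * Maintenance operations take the lock in write mode and therefore
 * serialize against all ongoing transfers.
 */
static void i3c_do_maintenance(struct i3c_bus *bus)
{
	down_write(&bus->lock);
	/* modify bus/device state */
	up_write(&bus->lock);
}

Replacing the rw_semaphore with a mutex would make both paths exclusive,
which is the cost the two-device benchmark above (roughly 38k vs 58k
messages in 10 s) illustrates.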
