Message-ID: <20140916002740.GA5671@zen>
Date:	Tue, 16 Sep 2014 08:27:40 +0800
From:	Amos Kong <akong@...hat.com>
To:	Michael Büsch <m@...s.ch>
Cc:	virtualization@...ts.linux-foundation.org, kvm@...r.kernel.org,
	herbert@...dor.apana.org.au, mpm@...enic.com,
	rusty@...tcorp.com.au, amit.shah@...hat.com,
	linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2 3/3] hw_random: increase schedule timeout in
 rng_dev_read()

On Mon, Sep 15, 2014 at 06:13:31PM +0200, Michael Büsch wrote:
> On Tue, 16 Sep 2014 00:02:29 +0800
> Amos Kong <akong@...hat.com> wrote:
> 
> > This patch increases the schedule timeout to 10 jiffies, which is more
> > appropriate, so other tasks can easily take the mutex lock.
> > 
> > Signed-off-by: Amos Kong <akong@...hat.com>
> > ---
> >  drivers/char/hw_random/core.c | 2 +-
> >  1 file changed, 1 insertion(+), 1 deletion(-)
> > 
> > diff --git a/drivers/char/hw_random/core.c b/drivers/char/hw_random/core.c
> > index 263a370..b5d1b6f 100644
> > --- a/drivers/char/hw_random/core.c
> > +++ b/drivers/char/hw_random/core.c
> > @@ -195,7 +195,7 @@ static ssize_t rng_dev_read(struct file *filp, char __user *buf,
> >  
> >  		mutex_unlock(&rng_mutex);
> >  
> > -		schedule_timeout_interruptible(1);
> > +		schedule_timeout_interruptible(10);
> >  
> >  		if (signal_pending(current)) {
> >  			err = -ERESTARTSYS;
> 
> Does a schedule of 1 ms or 10 ms decrease the throughput?

In my test environment, 1 jiffy always works (100%); as Amit suggested,
10 jiffies is more appropriate.

After applying the current 3 patches, there is a throughput regression.

  1.2 M/s -> 6 K/s

We can schedule only at the end of the loop (size == 0), and only for a
non-SMP guest, so SMP guests won't be affected.

|               if (!size && num_online_cpus() == 1)
|                       schedule_timeout_interruptible(timeout);
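
For context, this is roughly how that check would sit at the bottom of
the rng_dev_read() loop (the surrounding loop is paraphrased from the
driver, not copied verbatim; 'timeout' stands for the 1 or 10 jiffies
variants tested below):

|	while (size) {
|		/* ... fill the user buffer from the hwrng under rng_mutex ... */
|
|		mutex_unlock(&rng_mutex);
|
|		/* Yield only once the whole request is done (size == 0),
|		 * and only on a UP guest; SMP guests never hit the sleep.
|		 */
|		if (!size && num_online_cpus() == 1)
|			schedule_timeout_interruptible(timeout);
|
|		if (signal_pending(current)) {
|			err = -ERESTARTSYS;
|			goto out;
|		}
|	}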


Set timeout to 1:
  non-SMP guest with quick backend: 1.2 M/s -> about 49 K/s

Set timeout to 10:
  non-SMP guest with quick backend: 1.2 M/s -> about 490 K/s

We might need other benchmarks to test the performance, but we can
clearly see that the change caused a regression.

As we discussed in the other thread, need_resched() should work in this
case, so those patches might be the wrong fix.
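
Roughly, the idea would be something like this instead of the
unconditional sleep (just a sketch of the approach, not a tested patch):

|		mutex_unlock(&rng_mutex);
|
|		/* Only give up the CPU when the scheduler actually wants
|		 * it back; otherwise keep filling the buffer at full speed.
|		 */
|		if (need_resched())
|			schedule_timeout_interruptible(1);
|
|		if (signal_pending(current)) {
|			err = -ERESTARTSYS;
|			goto out;
|		}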

> I think we need some benchmarks.
> 
> -- 
> Michael



-- 
			Amos.

