Message-ID: <20140915181331.4e3f5fed@wiggum>
Date: Mon, 15 Sep 2014 18:13:31 +0200
From: Michael Büsch <m@...s.ch>
To: Amos Kong <akong@...hat.com>
Cc: virtualization@...ts.linux-foundation.org, kvm@...r.kernel.org,
herbert@...dor.apana.org.au, mpm@...enic.com,
rusty@...tcorp.com.au, amit.shah@...hat.com,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2 3/3] hw_random: increase schedule timeout in
rng_dev_read()
On Tue, 16 Sep 2014 00:02:29 +0800
Amos Kong <akong@...hat.com> wrote:
> This patch increases the schedule timeout to 10 jiffies, which is more
> appropriate; other tasks can then more easily take the mutex lock.
>
> Signed-off-by: Amos Kong <akong@...hat.com>
> ---
> drivers/char/hw_random/core.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/drivers/char/hw_random/core.c b/drivers/char/hw_random/core.c
> index 263a370..b5d1b6f 100644
> --- a/drivers/char/hw_random/core.c
> +++ b/drivers/char/hw_random/core.c
> @@ -195,7 +195,7 @@ static ssize_t rng_dev_read(struct file *filp, char __user *buf,
>
> mutex_unlock(&rng_mutex);
>
> - schedule_timeout_interruptible(1);
> + schedule_timeout_interruptible(10);
>
> if (signal_pending(current)) {
> err = -ERESTARTSYS;
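Note that schedule_timeout_interruptible() takes its timeout in jiffies,
so the actual wall-clock delay depends on CONFIG_HZ. A minimal, untested
sketch of the conversion, just for illustration (the helper name is made
up, not part of the patch):

  #include <linux/jiffies.h>
  #include <linux/printk.h>

  /* Hypothetical helper: print what 1 and 10 jiffies mean in ms on this
   * kernel. With HZ=100 that is ~10 ms and ~100 ms; with HZ=1000 it is
   * ~1 ms and ~10 ms. */
  static void hwrng_timeout_report(void)
  {
          pr_info("1 jiffy    = %u ms\n", jiffies_to_msecs(1));
          pr_info("10 jiffies = %u ms\n", jiffies_to_msecs(10));
  }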
Does a schedule timeout of 1 jiffy vs. 10 jiffies decrease the throughput?
I think we need some benchmarks.
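Something along these lines would do as a quick userspace measurement
(untested sketch; it assumes /dev/hwrng exists and is readable):

  /* Rough throughput test: read from /dev/hwrng for ~10 seconds and
   * report the achieved rate in bytes per second. */
  #include <fcntl.h>
  #include <stdio.h>
  #include <time.h>
  #include <unistd.h>

  int main(void)
  {
          char buf[4096];
          long long total = 0;
          time_t start, now;
          long elapsed;
          int fd = open("/dev/hwrng", O_RDONLY);

          if (fd < 0) {
                  perror("open /dev/hwrng");
                  return 1;
          }
          start = time(NULL);
          now = start;
          while (now - start < 10) {
                  ssize_t n = read(fd, buf, sizeof(buf));
                  if (n <= 0)
                          break;
                  total += n;
                  now = time(NULL);
          }
          close(fd);
          elapsed = (long)(now - start);
          if (elapsed <= 0)
                  elapsed = 1; /* avoid division by zero if read failed early */
          printf("%lld bytes in %ld s (%lld bytes/s)\n",
                 total, elapsed, total / elapsed);
          return 0;
  }

Running it once with the old timeout and once with the new one would show
whether the longer sleep costs any throughput.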
--
Michael