Message-ID: <20171107210748.GR3165@worktop.lehotels.local>
Date: Tue, 7 Nov 2017 22:07:48 +0100
From: Peter Zijlstra <peterz@...radead.org>
To: Jeremy Linton <jeremy.linton@....com>
Cc: linux-kernel@...r.kernel.org, mingo@...hat.com,
paulmck@...ux.vnet.ibm.com, dave@...olabs.net
Subject: Re: [PATCH] locktorture: Fix Oops when reader/writer count is 0
On Tue, Nov 07, 2017 at 02:01:58PM -0600, Jeremy Linton wrote:
> Hi,
>
> On 10/10/2017 10:52 AM, Jeremy Linton wrote:
> >If nwriters_stress=0 is passed to the lock torture test,
> >it will panic in:
>
> Ping?
>
> Has anyone had a chance to look at this?
Helps if you Cc the people actually working on this stuff of course...
>
> >
> >Internal error: Oops: 96000005 [#1] SMP
> >...
> >[<ffff000000b7022c>] __torture_print_stats+0x2c/0x1c8 [locktorture]
> >[<ffff000000b7070c>] lock_torture_stats_print+0x74/0x120 [locktorture]
> >[<ffff000000b707f8>] lock_torture_stats+0x40/0xa8 [locktorture]
> >[<ffff0000080f3570>] kthread+0x108/0x138
> >[<ffff000008084b90>] ret_from_fork+0x10/0x18
> >
> >This is caused by a dereference of a null statp. Fix that by
> >checking n_stress for a non-zero count before dereferencing statp.
> >
> >Signed-off-by: Jeremy Linton <jeremy.linton@....com>
> >---
> > kernel/locking/locktorture.c | 6 +++++-
> > 1 file changed, 5 insertions(+), 1 deletion(-)
> >
> >diff --git a/kernel/locking/locktorture.c b/kernel/locking/locktorture.c
> >index f24582d4dad3..8229ba7147e5 100644
> >--- a/kernel/locking/locktorture.c
> >+++ b/kernel/locking/locktorture.c
> >@@ -716,10 +716,14 @@ static void __torture_print_stats(char *page,
> > 	bool fail = 0;
> > 	int i, n_stress;
> > 	long max = 0;
> >-	long min = statp[0].n_lock_acquired;
> >+	long min = 0;
> > 	long long sum = 0;
> > 	n_stress = write ? cxt.nrealwriters_stress : cxt.nrealreaders_stress;
> >+
> >+	if (n_stress)
> >+		min = statp[0].n_lock_acquired;
> >+
> > 	for (i = 0; i < n_stress; i++) {
> > 		if (statp[i].n_lock_fail)
> > 			fail = true;
> >
>
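For anyone reading along, here is a minimal standalone sketch of the
pattern the hunk uses: only seed the minimum from statp[0] when there is
at least one stats slot, so a NULL (or zero-length) statp is never
dereferenced. The names below (struct lock_stat, print_min_acquired) are
simplified stand-ins for illustration, not the actual locktorture code:

#include <stdio.h>
#include <stddef.h>

struct lock_stat {
	long n_lock_acquired;
};

static void print_min_acquired(struct lock_stat *statp, int n_stress)
{
	long min = 0;
	int i;

	/* Guard: statp may be NULL when n_stress == 0. */
	if (n_stress)
		min = statp[0].n_lock_acquired;

	for (i = 0; i < n_stress; i++) {
		if (statp[i].n_lock_acquired < min)
			min = statp[i].n_lock_acquired;
	}

	printf("min acquisitions: %ld\n", min);
}

int main(void)
{
	struct lock_stat stats[] = { { 5 }, { 3 }, { 7 } };

	print_min_acquired(stats, 3);	/* normal case */
	print_min_acquired(NULL, 0);	/* nwriters_stress=0 analogue: no deref */
	return 0;
}

With n_stress == 0 the loop body never runs either, so leaving min at 0
is the only extra handling needed.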