Message-ID: <20201119111343.74956eae@monster.powergraphx.local>
Date: Thu, 19 Nov 2020 11:13:43 +0100
From: Wilken Gottwalt <wilken.gottwalt@...teo.net>
To: Maxime Ripard <maxime@...no.tech>
Cc: linux-kernel@...r.kernel.org, Ohad Ben-Cohen <ohad@...ery.com>,
Bjorn Andersson <bjorn.andersson@...aro.org>,
Baolin Wang <baolin.wang7@...il.com>,
Rob Herring <robh+dt@...nel.org>, Chen-Yu Tsai <wens@...e.org>,
Jernej Skrabec <jernej.skrabec@...l.net>
Subject: Re: [PATCH 2/2] hwspinlock: add sunxi hardware spinlock support
On Thu, 19 Nov 2020 08:15:23 +0100
Maxime Ripard <maxime@...no.tech> wrote:
> > can you help me here a bit? I still try to figure out how to do patch sets
> > properly. Some kernel submitting documentation says everything goes into the
> > coverletter and other documentation only tells how to split the patches. So
> > what would be the right way? A quick example based on my patch set would be
> > really helpful.
>
> I mean, the split between your patches and so on is good, you got that right
>
> The thing I wanted better details on is the commit log itself, so the
> message attached to that patch.
Ah yes, I think I got it now. So basically I should add a nice summary from the
cover letter there.
> > > Most importantly, this hwspinlock is used to synchronize the ARM cores
> > > and the ARISC. How did you test this driver?
> >
> > Yes, you are right, I should have mentioned this. I have a simple test kernel
> > module for this. But I must admit, testing the ARISC is very hard and I have
> > no real idea how to do it. Testing the hwspinlocks in general seems to work
> > with my test kernel module, but I'm not sure if this is really sufficient. I
> > can provide the code for it if you like. What would be the best way? Github?
> > Just mailing a patch?
> >
> > The test module produces these results:
> >
> > # insmod /lib/modules/5.9.8/kernel/drivers/hwspinlock/sunxi_hwspinlock_test.ko
> > [ 45.395672] [init] sunxi hwspinlock test driver start
> > [ 45.400775] [init] start test locks
> > [ 45.404263] [run ] testing 32 locks
> > [ 45.407804] [test] testing lock 0 -----
> > [ 45.411652] [test] taking lock attempt #0 succeded
> > [ 45.416438] [test] try taken lock attempt #0
> > [ 45.420735] [test] unlock/take attempt #0
> > [ 45.424752] [test] taking lock attempt #1 succeded
> > [ 45.429556] [test] try taken lock attempt #1
> > [ 45.433823] [test] unlock/take attempt #1
> > [ 45.437862] [test] testing lock 1 -----
>
> That doesn't really test for contention though, and dealing with
> contention is mostly what this hardware is about. Could you make a small
> test with crust to see if when the arisc has taken the lock, the ARM
> cores can't take it?
So the best solution would be to write a bare-metal program that runs on the
arisc and can be triggered from the Linux side (by the test kernel module) to take
a spinlock ... or at least one that takes spinlocks periodically for a while so I
can watch it from the Linux side. Okay, I think I can do this. Though, I have to
dig through all this new stuff first.
Greetings,
Wilken