Message-ID: <CAD=FV=UZTyfVN=a35hiXxdNSvdhfK60HewP_p_VvH1KdoUa1ww@mail.gmail.com>
Date: Fri, 27 Mar 2020 11:42:03 -0700
From: Doug Anderson <dianders@...omium.org>
To: Maulik Shah <mkshah@...eaurora.org>
Cc: Stephen Boyd <swboyd@...omium.org>,
Evan Green <evgreen@...omium.org>,
Bjorn Andersson <bjorn.andersson@...aro.org>,
LKML <linux-kernel@...r.kernel.org>,
linux-arm-msm <linux-arm-msm@...r.kernel.org>,
Andy Gross <agross@...nel.org>,
Matthias Kaehlcke <mka@...omium.org>,
Rajendra Nayak <rnayak@...eaurora.org>,
Lina Iyer <ilina@...eaurora.org>, lsrao@...eaurora.org
Subject: Re: [PATCH v14 6/6] soc: qcom: rpmh-rsc: Allow using free WAKE TCS
for active request
Hi,
On Fri, Mar 27, 2020 at 5:04 AM Maulik Shah <mkshah@...eaurora.org> wrote:
>
> > Why can't rpmh_write()
> > / rpmh_write_async() / rpmh_write_batch() just always unconditionally
> > mark the cache dirty? Are there really lots of cases when those calls
> > are made and they do nothing?
>
> rpmh.c doesn't know that rpmh-rsc.c worked on a borrowed TCS to finish the request.
>
> We should not blindly mark caches dirty every time.
In message ID "5a5274ac-41f4-b06d-ff49-c00cef67aa7f@...eaurora.org"
which seems to be missing from the archives, you said:
> yes we should trust callers not to send duplicate data
...you can see some reference to it in my reply:
https://lore.kernel.org/r/CAD=FV=VPSahhK71k_D+nfL1=5QE5sKMQT=6zzyEF7+JWMcTxsg@mail.gmail.com/
If callers are trusted to never send duplicate data then every call to
rpmh_write() will make a change. ...and thus the cache should always
be marked dirty, no? Also note that since an rpmh_write() to "active"
also counts as a write to "wake", even those will dirty the cache.

In which case do you expect an rpmh_write() call to not dirty the cache?
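To make my question concrete, here's roughly what I'm picturing. This
is just an illustrative sketch from memory of cache_rpm_request() in
rpmh.c, not a real patch: it's simplified, and the made-up
__find_or_alloc_req() helper just stands in for the existing
find/allocate code there.

static struct cache_req *cache_rpm_request(struct rpmh_ctrlr *ctrlr,
                                           enum rpmh_state state,
                                           struct tcs_cmd *cmd)
{
        struct cache_req *req;
        unsigned long flags;

        spin_lock_irqsave(&ctrlr->cache_lock, flags);

        /* Hypothetical helper: find or allocate the entry for this addr. */
        req = __find_or_alloc_req(ctrlr, cmd->addr);
        if (IS_ERR(req))
                goto unlock;

        switch (state) {
        case RPMH_ACTIVE_ONLY_STATE:    /* "active" also counts as "wake" */
        case RPMH_WAKE_ONLY_STATE:
                req->wake_val = cmd->data;
                break;
        case RPMH_SLEEP_STATE:
                req->sleep_val = cmd->data;
                break;
        }

        /*
         * If callers never send duplicate data then every call here
         * changed something, so mark the cache dirty unconditionally
         * instead of comparing old vs. new values.
         */
        ctrlr->dirty = true;

unlock:
        spin_unlock_irqrestore(&ctrlr->cache_lock, flags);
        return req;
}

In other words: if every rpmh_write() really changes something then the
old-vs-new comparison buys us nothing and the unconditional
"ctrlr->dirty = true" is simpler.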
> > ...interestingly, after your patch I guess tcs_invalidate()
> > no longer needs spinlocks since it's only ever called from PM code on
> > the last CPU. ...if you agree, I can always do it in my cleanup
> > series. See:
> >
> > https://lore.kernel.org/r/CAD=FV=Xp1o68HnC2-hMnffDDsi+jjgc9pNrdNuypjQZbS5K4nQ@mail.gmail.com
> >
> > -Doug
>
> There are other RSCs which use the same driver, so let's keep the spinlock.
It is really hard to write code keeping in mind these "other RSCs" for
which there is no code upstream. IMO we should write the code keeping
in mind what is supported upstream and then, when those "other RSCs"
get added, we can evaluate their needs.
Specifically, when reasoning about rpmh.c and rpmh-rsc.c I can only
look at the code that's there now and decide whether or not it is race
free. Back when I was analyzing the proposal to call rpmh_flush() all
the time (not just from PM code), it felt like there were a bunch of
races, especially in the zero-active-TCS case. Most of those races go
away when you assume that rpmh_flush() is only ever called from the PM
code when nobody could be in the middle of an active transfer.
If we are ever planning to call rpmh_flush() from somewhere else, we
will need to re-examine all those races.
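For reference, the pattern that makes those races go away is the one
this series already follows: only flush from the CPU PM notifier once
the last CPU is entering idle and nothing is mid-flight. Very roughly
(a simplified sketch of what I understand your CPU PM callback to do;
the pm_lock / cpus_entered_pm / rsc_pm names are from memory and may
not match your actual patch):

static int rpmh_rsc_cpu_pm_callback(struct notifier_block *nfb,
                                    unsigned long action, void *v)
{
        struct rsc_drv *drv = container_of(nfb, struct rsc_drv, rsc_pm);
        int ret = NOTIFY_OK;

        spin_lock(&drv->pm_lock);

        switch (action) {
        case CPU_PM_ENTER:
                cpumask_set_cpu(smp_processor_id(), &drv->cpus_entered_pm);
                /* Only the last CPU entering idle gets to flush. */
                if (!cpumask_equal(&drv->cpus_entered_pm, cpu_online_mask))
                        goto exit;
                break;
        case CPU_PM_ENTER_FAILED:
        case CPU_PM_EXIT:
                cpumask_clear_cpu(smp_processor_id(), &drv->cpus_entered_pm);
                goto exit;
        default:
                ret = NOTIFY_DONE;
                goto exit;
        }

        /* Nobody can be mid-transfer at this point, so flushing is safe. */
        if (rpmh_rsc_ctrlr_is_busy(drv) || rpmh_flush(&drv->client))
                ret = NOTIFY_BAD;

exit:
        spin_unlock(&drv->pm_lock);
        return ret;
}

Calling rpmh_flush() from anywhere else loses the "nobody is in the
middle of an active transfer" guarantee, which is exactly where the
races come back.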
-Doug