Message-ID: <20200127071128.GA279449@kroah.com>
Date: Mon, 27 Jan 2020 08:11:28 +0100
From: Greg KH <gregkh@...uxfoundation.org>
To: Manivannan Sadhasivam <manivannan.sadhasivam@...aro.org>
Cc: jhugo@...eaurora.org, arnd@...db.de, smohanad@...eaurora.org,
kvalo@...eaurora.org, bjorn.andersson@...aro.org,
hemantk@...eaurora.org, linux-arm-msm@...r.kernel.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH 02/16] bus: mhi: core: Add support for registering MHI
controllers
On Mon, Jan 27, 2020 at 12:32:52PM +0530, Manivannan Sadhasivam wrote:
> > > > + void __iomem *regs;
> > > > + dma_addr_t iova_start;
> > > > + dma_addr_t iova_stop;
> > > > + const char *fw_image;
> > > > + const char *edl_image;
> > > > + bool fbc_download;
> > > > + size_t sbl_size;
> > > > + size_t seg_len;
> > > > + u32 max_chan;
> > > > + struct mhi_chan *mhi_chan;
> > > > + struct list_head lpm_chans;
> > > > + u32 total_ev_rings;
> > > > + u32 hw_ev_rings;
> > > > + u32 sw_ev_rings;
> > > > + u32 nr_irqs_req;
> > > > + u32 nr_irqs;
> > > > + int *irq;
> > > > +
> > > > + struct mhi_event *mhi_event;
> > > > + struct mhi_cmd *mhi_cmd;
> > > > + struct mhi_ctxt *mhi_ctxt;
> > > > +
> > > > + u32 timeout_ms;
> > > > + struct mutex pm_mutex;
> > > > + bool pre_init;
> > > > + rwlock_t pm_lock;
> > > > + u32 pm_state;
> > > > + u32 db_access;
> > > > + enum mhi_ee_type ee;
> > > > + bool wake_set;
> > > > + atomic_t dev_wake;
> > > > + atomic_t alloc_size;
> > > > + atomic_t pending_pkts;
> > >
> > > Why a bunch of atomic variables when you already have a lock?
> > >
>
> So there are multiple locks used throughout the MHI stack and each one
> serves its own purpose. For instance, pm_lock protects against
> concurrent access to the PM state, transition_lock protects against
> concurrent access to the state transition list, and wlock protects
> against concurrent access to the device wake state. Since there are
> multiple worker threads, each trying to update these variables, we did
> our best to protect against race conditions by having all these locks.
>
> And then there are these atomic variables, which are again shared with
> the threads holding the above locks, more precisely with threads holding
> read locks. So it becomes convenient to just use the atomic_ APIs to
> update these variables.
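
(A minimal sketch of the pattern being described, with made-up names
rather than the actual MHI code: a data path takes the rwlock for
reading while bumping a shared counter, so the counter is kept atomic
instead of growing yet another write-side lock.)

#include <linux/atomic.h>
#include <linux/spinlock.h>
#include <linux/types.h>

struct example_cntrl {
	rwlock_t pm_lock;		/* protects pm_state */
	u32 pm_state;
	atomic_t pending_pkts;		/* bumped from several read-side paths */
};

static void example_queue_pkt(struct example_cntrl *cntrl)
{
	/*
	 * The read lock only guarantees pm_state does not change under
	 * us; it does not serialize counter updates against other
	 * readers, hence atomic_inc().
	 */
	read_lock_bh(&cntrl->pm_lock);
	if (cntrl->pm_state)	/* stand-in for a "registers accessible" check */
		atomic_inc(&cntrl->pending_pkts);
	read_unlock_bh(&cntrl->pm_lock);
}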
An atomic_ API is almost as heavy as a "normal" lock, so while you might
think it is convenient, it's odd that you feel it is needed. As an
example, "wake_set" and "dev_wake" look like they are updated at the
same time, yet one is going to be protected by a lock and the other one
updated without one?
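
(To illustrate the alternative being hinted at, with hypothetical names:
if wake_set and dev_wake really do change together, both can sit under
the same spinlock and dev_wake can be a plain counter, avoiding the
mixed atomic/locked scheme.)

#include <linux/spinlock.h>
#include <linux/types.h>

struct example_wake_state {
	spinlock_t wlock;	/* protects both fields below */
	bool wake_set;
	u32 dev_wake;		/* plain counter, no atomic_t needed */
};

static void example_device_wake_get(struct example_wake_state *w)
{
	unsigned long flags;

	spin_lock_irqsave(&w->wlock, flags);
	/*
	 * First waker asserts the wake doorbell; both fields change
	 * together under one lock, so neither needs to be atomic.
	 */
	if (!w->dev_wake++)
		w->wake_set = true;
	spin_unlock_irqrestore(&w->wlock, flags);
}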
Anyway, I'll leave this alone; let's see what your next round looks
like...
greg k-h