Date: Wed, 29 Mar 2017 19:01:09 +0100
From: Sudeep Holla <sudeep.holla@....com>
To: Jassi Brar <jassisinghbrar@...il.com>, linux-kernel@...r.kernel.org
Cc: Sudeep Holla <sudeep.holla@....com>, alexey.klimov@....com,
	Jassi Brar <jaswinder.singh@...aro.org>
Subject: Re: [PATCH] mailbox: fix completion order for blocking requests

On 29/03/17 18:43, Jassi Brar wrote:
> Currently two threads waiting on blocking requests could wake up on
> completion of each other's request, as ...
>
>        Thread#1(T1)                     Thread#2(T2)
>     mbox_send_message                mbox_send_message
>            |                                |
>            V                                |
>      add_to_rbuf(M1)                        V
>            |                         add_to_rbuf(M2)
>            |                                |
>            |                                V
>            V                       msg_submit(picks M1)
>       msg_submit                            |
>            |                                V
>            V                   wait_for_completion(on M2)
> wait_for_completion(on M1)           |  (1st in waitQ)
>     |  (2nd in waitQ)                       V
>     V                     wake_up(on completion of M1) <-- incorrect
>
> Fix this situation by assigning completion structures to each queued
> request, so that the threads can wait on their own completions.
>

Alexey came up with an exactly similar solution. I didn't like:

1. the static array just bloats the structure with an equal number of
   completions, which may be useless for !TXDONE_BY_POLL
2. we have client drivers already doing something similar; I wanted to
   fix/move those along with this fix, or at least see the feasibility

> Reported-by: Alexey Klimov <alexey.klimov@....com>
> Signed-off-by: Jassi Brar <jaswinder.singh@...aro.org>
> ---
>  drivers/mailbox/mailbox.c          | 15 +++++++++++----
>  drivers/mailbox/omap-mailbox.c     |  2 +-
>  drivers/mailbox/pcc.c              |  2 +-
>  include/linux/mailbox_controller.h |  6 ++++--
>  4 files changed, 17 insertions(+), 8 deletions(-)
>
> diff --git a/drivers/mailbox/mailbox.c b/drivers/mailbox/mailbox.c
> index 9dfbf7e..e06c50c 100644
> --- a/drivers/mailbox/mailbox.c
> +++ b/drivers/mailbox/mailbox.c
> @@ -41,6 +41,7 @@ static int add_to_rbuf(struct mbox_chan *chan, void *mssg)
>
>  	idx = chan->msg_free;
>  	chan->msg_data[idx] = mssg;
> +	init_completion(&chan->tx_cmpl[idx]);

reinit_completion() would be better here.
>  	chan->msg_count++;
>
>  	if (idx == MBOX_TX_QUEUE_LEN - 1)
> @@ -73,6 +74,7 @@ static void msg_submit(struct mbox_chan *chan)
>  		idx += MBOX_TX_QUEUE_LEN - count;
>
>  	data = chan->msg_data[idx];
> +	chan->tx_complete = &chan->tx_cmpl[idx];
>
>  	if (chan->cl->tx_prepare)
>  		chan->cl->tx_prepare(chan->cl, data);
> @@ -81,7 +83,8 @@
>  	if (!err) {
>  		chan->active_req = data;
>  		chan->msg_count--;
> -	}
> +	} else
> +		chan->tx_complete = NULL;
>  exit:
>  	spin_unlock_irqrestore(&chan->lock, flags);
>
> @@ -92,12 +95,15 @@ static void msg_submit(struct mbox_chan *chan)
>
>  static void tx_tick(struct mbox_chan *chan, int r)
>  {
> +	struct completion *tx_complete;
>  	unsigned long flags;
>  	void *mssg;
>
>  	spin_lock_irqsave(&chan->lock, flags);
>  	mssg = chan->active_req;
> +	tx_complete = chan->tx_complete;
>  	chan->active_req = NULL;
> +	chan->tx_complete = NULL;
>  	spin_unlock_irqrestore(&chan->lock, flags);
>
>  	/* Submit next message */
> @@ -111,7 +117,7 @@ static void tx_tick(struct mbox_chan *chan, int r)
>  		chan->cl->tx_done(chan->cl, mssg, r);
>
>  	if (r != -ETIME && chan->cl->tx_block)
> -		complete(&chan->tx_complete);
> +		complete(tx_complete);
>  }
>
>  static enum hrtimer_restart txdone_hrtimer(struct hrtimer *hrtimer)
> @@ -272,7 +278,7 @@ int mbox_send_message(struct mbox_chan *chan, void *mssg)
>  	else
>  		wait = msecs_to_jiffies(chan->cl->tx_tout);
>
> -	ret = wait_for_completion_timeout(&chan->tx_complete, wait);
> +	ret = wait_for_completion_timeout(&chan->tx_cmpl[t], wait);
>  	if (ret == 0) {
>  		t = -ETIME;
>  		tx_tick(chan, t);
> @@ -348,7 +354,7 @@ struct mbox_chan *mbox_request_channel(struct mbox_client *cl, int index)
>  	chan->msg_count = 0;
>  	chan->active_req = NULL;
>  	chan->cl = cl;
> -	init_completion(&chan->tx_complete);
> +	chan->tx_complete = NULL;
>
>  	if (chan->txdone_method == TXDONE_BY_POLL && cl->knows_txdone)
>  		chan->txdone_method |= TXDONE_BY_ACK;
> @@ -414,6 +420,7 @@ void mbox_free_channel(struct mbox_chan *chan)
>  	spin_lock_irqsave(&chan->lock, flags);
>  	chan->cl = NULL;
>  	chan->active_req = NULL;
> +	chan->tx_complete = NULL;
>  	if (chan->txdone_method == (TXDONE_BY_POLL | TXDONE_BY_ACK))
>  		chan->txdone_method = TXDONE_BY_POLL;
>
> diff --git a/drivers/mailbox/omap-mailbox.c b/drivers/mailbox/omap-mailbox.c
> index c5e8b9c..99b0841 100644
> --- a/drivers/mailbox/omap-mailbox.c
> +++ b/drivers/mailbox/omap-mailbox.c
> @@ -449,7 +449,7 @@ struct mbox_chan *omap_mbox_request_channel(struct mbox_client *cl,
>  	chan->msg_count = 0;
>  	chan->active_req = NULL;
>  	chan->cl = cl;
> -	init_completion(&chan->tx_complete);
> +	chan->tx_complete = NULL;
>  	spin_unlock_irqrestore(&chan->lock, flags);
>
>  	ret = chan->mbox->ops->startup(chan);
> diff --git a/drivers/mailbox/pcc.c b/drivers/mailbox/pcc.c
> index dd9ecd35..b26cc9c 100644
> --- a/drivers/mailbox/pcc.c
> +++ b/drivers/mailbox/pcc.c
> @@ -263,7 +263,7 @@ struct mbox_chan *pcc_mbox_request_channel(struct mbox_client *cl,
>  	chan->msg_count = 0;
>  	chan->active_req = NULL;
>  	chan->cl = cl;
> -	init_completion(&chan->tx_complete);
> +	chan->tx_complete = NULL;
>
>  	if (chan->txdone_method == TXDONE_BY_POLL && cl->knows_txdone)
>  		chan->txdone_method |= TXDONE_BY_ACK;
> diff --git a/include/linux/mailbox_controller.h b/include/linux/mailbox_controller.h
> index 74deadb..aac8659 100644
> --- a/include/linux/mailbox_controller.h
> +++ b/include/linux/mailbox_controller.h
> @@ -106,11 +106,12 @@ struct mbox_controller {
>   * @mbox: Pointer to the parent/provider of this channel
>   * @txdone_method: Way to detect TXDone chosen by the API
>   * @cl: Pointer to the current owner of this channel
> - * @tx_complete: Transmission completion
> + * @tx_complete: Pointer to current transmission completion
>   * @active_req: Currently active request hook
>   * @msg_count: No. of mssg currently queued
>   * @msg_free: Index of next available mssg slot
>   * @msg_data: Hook for data packet
> + * @tx_cmpl: Per-message completion structure
>   * @lock: Serialise access to the channel
>   * @con_priv: Hook for controller driver to attach private data
>   */
> @@ -118,10 +119,11 @@ struct mbox_chan {
>  	struct mbox_controller *mbox;
>  	unsigned txdone_method;
>  	struct mbox_client *cl;
> -	struct completion tx_complete;
> +	struct completion *tx_complete;
>  	void *active_req;
>  	unsigned msg_count, msg_free;
>  	void *msg_data[MBOX_TX_QUEUE_LEN];
> +	struct completion tx_cmpl[MBOX_TX_QUEUE_LEN];

It may be better to encapsulate msg_data and tx_cmpl into a structure, so
that we have just one pointer for active_req and need not track the
corresponding *tx_complete.

--
Regards,
Sudeep