Message-ID: <20190723141625.GA8972@splinter>
Date: Tue, 23 Jul 2019 17:16:25 +0300
From: Ido Schimmel <idosch@...sch.org>
To: Neil Horman <nhorman@...driver.com>
Cc: netdev@...r.kernel.org, davem@...emloft.net, dsahern@...il.com,
roopa@...ulusnetworks.com, nikolay@...ulusnetworks.com,
jakub.kicinski@...ronome.com, toke@...hat.com, andy@...yhouse.net,
f.fainelli@...il.com, andrew@...n.ch, vivien.didelot@...il.com,
mlxsw@...lanox.com, Ido Schimmel <idosch@...lanox.com>
Subject: Re: [RFC PATCH net-next 10/12] drop_monitor: Add packet alert mode
On Tue, Jul 23, 2019 at 08:43:40AM -0400, Neil Horman wrote:
> On Mon, Jul 22, 2019 at 09:31:32PM +0300, Ido Schimmel wrote:
> > +static void net_dm_packet_work(struct work_struct *work)
> > +{
> > + struct per_cpu_dm_data *data;
> > + struct sk_buff_head list;
> > + struct sk_buff *skb;
> > + unsigned long flags;
> > +
> > + data = container_of(work, struct per_cpu_dm_data, dm_alert_work);
> > +
> > + __skb_queue_head_init(&list);
> > +
> > + spin_lock_irqsave(&data->drop_queue.lock, flags);
> > + skb_queue_splice_tail_init(&data->drop_queue, &list);
> > + spin_unlock_irqrestore(&data->drop_queue.lock, flags);
> > +
> These functions are all executed in a per-cpu context. While there's nothing
> wrong with using a spinlock here, I think you can get away with just doing
> local_irq_save() and local_irq_restore().
Hi Neil,

Thanks a lot for reviewing. I might be missing something, but please
note that this function is executed from a workqueue and therefore the
CPU it is running on does not have to be the same CPU to which 'data'
belongs. Given that, I'm not sure how I can avoid taking the spinlock,
as otherwise two different CPUs can modify the list concurrently.
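
To illustrate, here is a minimal sketch of the enqueue side (the
function name is illustrative and the queue length check from the
patch is omitted):

#include <linux/skbuff.h>
#include <linux/spinlock.h>
#include <linux/workqueue.h>

struct per_cpu_dm_data {
	struct sk_buff_head	drop_queue;
	struct work_struct	dm_alert_work;
};

/* Runs from the drop tracepoint on the CPU that owns 'data'. */
static void net_dm_packet_enqueue(struct per_cpu_dm_data *data,
				  struct sk_buff *skb)
{
	unsigned long flags;

	spin_lock_irqsave(&data->drop_queue.lock, flags);
	__skb_queue_tail(&data->drop_queue, skb);
	spin_unlock_irqrestore(&data->drop_queue.lock, flags);

	schedule_work(&data->dm_alert_work);
}

Since net_dm_packet_work() can be scheduled on a different CPU than the
one that owns 'data', disabling local interrupts in the work function
would not stop the tracepoint on the owning CPU from appending to
'drop_queue' at the same time; only the queue's spinlock serializes the
two paths.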
>
> Neil
>
> > + while ((skb = __skb_dequeue(&list)))
> > + net_dm_packet_report(skb);
> > +}