Message-ID: <20190723151431.GA8419@localhost.localdomain>
Date:   Tue, 23 Jul 2019 11:14:31 -0400
From:   Neil Horman <nhorman@...driver.com>
To:     Ido Schimmel <idosch@...sch.org>
Cc:     netdev@...r.kernel.org, davem@...emloft.net, dsahern@...il.com,
        roopa@...ulusnetworks.com, nikolay@...ulusnetworks.com,
        jakub.kicinski@...ronome.com, toke@...hat.com, andy@...yhouse.net,
        f.fainelli@...il.com, andrew@...n.ch, vivien.didelot@...il.com,
        mlxsw@...lanox.com, Ido Schimmel <idosch@...lanox.com>
Subject: Re: [RFC PATCH net-next 10/12] drop_monitor: Add packet alert mode

On Tue, Jul 23, 2019 at 05:16:25PM +0300, Ido Schimmel wrote:
> On Tue, Jul 23, 2019 at 08:43:40AM -0400, Neil Horman wrote:
> > On Mon, Jul 22, 2019 at 09:31:32PM +0300, Ido Schimmel wrote:
> > > +static void net_dm_packet_work(struct work_struct *work)
> > > +{
> > > +	struct per_cpu_dm_data *data;
> > > +	struct sk_buff_head list;
> > > +	struct sk_buff *skb;
> > > +	unsigned long flags;
> > > +
> > > +	data = container_of(work, struct per_cpu_dm_data, dm_alert_work);
> > > +
> > > +	__skb_queue_head_init(&list);
> > > +
> > > +	spin_lock_irqsave(&data->drop_queue.lock, flags);
> > > +	skb_queue_splice_tail_init(&data->drop_queue, &list);
> > > +	spin_unlock_irqrestore(&data->drop_queue.lock, flags);
> > > +
> > These functions are all executed in a per-cpu context.  While there's nothing
> > wrong with using a spinlock here, I think you can get away with just doing
> > local_irq_save() and local_irq_restore().
> 
> Hi Neil,
> 
> Thanks a lot for reviewing. I might be missing something, but please
> note that this function is executed from a workqueue and therefore the
> CPU it is running on does not have to be the same CPU to which 'data'
> belongs. If so, I'm not sure how I can avoid taking the spinlock, as
> otherwise two different CPUs can modify the list concurrently.
> 
Ah, my bad, I was under the impression that the schedule_work call for
that particular work queue was actually a call to schedule_work_on,
which would have affined it to a specific cpu.  That said, looking at
it, I think using schedule_work_on was my initial intent, as the work
queue is registered per cpu.  Converting it to schedule_work_on would
let you replace the spin_lock with a faster local_irq_save().
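
For illustration, a rough, untested sketch of what I mean, reusing the
names from your patch and assuming the enqueue side always runs on the
cpu that owns 'data' (e.g. from hardirq/softirq context with preemption
disabled):

static void net_dm_packet_work(struct work_struct *work)
{
	struct per_cpu_dm_data *data;
	struct sk_buff_head list;
	struct sk_buff *skb;
	unsigned long flags;

	data = container_of(work, struct per_cpu_dm_data, dm_alert_work);

	__skb_queue_head_init(&list);

	/* With the work queued via schedule_work_on(), this always runs
	 * on the cpu that owns 'data', so masking local interrupts is
	 * enough to keep the per-cpu producer out of the splice.
	 */
	local_irq_save(flags);
	skb_queue_splice_tail_init(&data->drop_queue, &list);
	local_irq_restore(flags);

	while ((skb = __skb_dequeue(&list)))
		net_dm_packet_report(skb);
}

/* ...and on the producer side, instead of schedule_work():
 *
 *	schedule_work_on(smp_processor_id(), &data->dm_alert_work);
 */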

Otherwise though, this looks really good to me.
Neil

> > 
> > Neil
> > 
> > > +	while ((skb = __skb_dequeue(&list)))
> > > +		net_dm_packet_report(skb);
> > > +}
> 
