Message-ID: <52D3CBA5.4080301@redhat.com>
Date: Mon, 13 Jan 2014 12:19:01 +0100
From: Daniel Borkmann <dborkman@...hat.com>
To: Cong Wang <cwang@...pensource.com>
CC: David Miller <davem@...emloft.net>, netdev <netdev@...r.kernel.org>
Subject: Re: [PATCH net-next 3/3] packet: use percpu mmap tx frame pending
refcount
On 01/13/2014 06:51 AM, Cong Wang wrote:
> On Sun, Jan 12, 2014 at 8:22 AM, Daniel Borkmann <dborkman@...hat.com> wrote:
>> +static void packet_inc_pending(struct packet_ring_buffer *rb)
>> +{
>> +	this_cpu_inc(*rb->pending_refcnt);
>> +}
>> +
>> +static void packet_dec_pending(struct packet_ring_buffer *rb)
>> +{
>> +	this_cpu_dec(*rb->pending_refcnt);
>> +}
>> +
>> +static int packet_read_pending(const struct packet_ring_buffer *rb)
>> +{
>> +	int i, refcnt = 0;
>> +
>> +	/* We don't use pending refcount in rx_ring. */
>> +	if (rb->pending_refcnt == NULL)
>> +		return 0;
>> +
>> +	for_each_possible_cpu(i)
>> +		refcnt += *per_cpu_ptr(rb->pending_refcnt, i);
>> +
>> +	return refcnt;
>> +}
>
> How is this supposed to work? Since there is no lock,
> you can't read an accurate refcnt. Take a look at lib/percpu_counter.c.
>
> I guess for some reason you don't care about accuracy?
Yep, not per se. Look at how we do net device reference counting.
The reason is that we call packet_read_pending() *only* after we
have finished processing all frames in TX_RING; when MSG_DONTWAIT
is *not set*, we wait for completion, and at that point the sum is
back to 0.
But I think I found a different problem with this idea. It could
happen with net devices as well, though probably less likely, since
hold/puts tend to be better distributed among CPUs. For TX_RING,
however, if the process is pinned to a particular CPU, and since
the destructor is invoked through ksoftirqd, we could end up with
an imbalance, and if the process runs long enough, the counter for
one particular CPU could eventually overflow. We could work around
that, but I don't think it's worth the effort.
Dave, please drop the 3rd patch of the series, thanks.
> Then at least you need to comment in the code.
>
> Thanks.
>