Message-ID: <20130316220216.GA25099@dcvr.yhbt.net>
Date:	Sat, 16 Mar 2013 22:02:16 +0000
From:	Eric Wong <normalperson@...t.net>
To:	Mathieu Desnoyers <mathieu.desnoyers@...icios.com>
Cc:	Lai Jiangshan <laijs@...fujitsu.com>,
	"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>,
	Stephen Hemminger <shemminger@...tta.com>,
	Davide Libenzi <davidel@...ilserver.org>,
	linux-kernel@...r.kernel.org
Subject: Re: [RFC PATCH] Linux kernel Wait-Free Concurrent Queue
 Implementation

Eric Wong <normalperson@...t.net> wrote:
> Mathieu Desnoyers <mathieu.desnoyers@...icios.com> wrote:
> > * Eric Wong (normalperson@...t.net) wrote:
> > > Mathieu Desnoyers <mathieu.desnoyers@...icios.com> wrote:
> > > > +/*
> > > > + * Load data from shared memory.
> > > > + */
> > > > +#define CMM_LOAD_SHARED(p)		ACCESS_ONCE(p)
> > > 
> > > When iterating through the queue by dequeueing, I needed a way
> > > to get the tail at the start of the iteration and use that as
> > > a sentinel while iterating, so I access the tail like this:
> > > 
> > > 	struct wfcq_node *p = CMM_LOAD_SHARED(ep->rdltail.p);
> > > 
> > > I hope this is supported... it seems to work :)
> > 
> > Ideally it would be good if users could try using the exposed APIs to do
> > these things, or if it's really needed, maybe it's a sign that we need
> > to extend the API.
> 
> Right.  If I can use splice, I will not need this.  More comments below
> on splice...

Even with splice, I think I need to see the main tail at the start of
iteration to maintain compatibility (for weird apps that might care).
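
Something like this is the pattern I have in mind (rough sketch,
untested; it assumes the wfcq_* calls from your RFC patch, and
process_node() is a made-up stand-in for the per-node work):

/*
 * Snapshot the tail once, then dequeue until the snapshotted node
 * has been processed; nodes enqueued after the snapshot are left
 * for a later pass.
 */
static void drain_up_to_snapshot(struct wfcq_head *head,
				 struct wfcq_tail *tail)
{
	struct wfcq_node *sentinel = CMM_LOAD_SHARED(tail->p);
	struct wfcq_node *node;

	do {
		node = __wfcq_dequeue(head, tail);
		if (!node)
			break;	/* queue drained before the snapshot */
		process_node(node);
	} while (node != sentinel);
}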

Consider this scenario:

  1) main.queue has 20 events

  2) epoll_wait(maxevents=16) called by user

  3) splice all 20 events into unconsumed.queue, main.queue is empty

  4) put_user + dequeue on 16 events from unconsumed.queue
     # unconsumed.queue has 4 left at this point

  5) main.queue gets several more events enqueued at any point
     after step 3.

  6) epoll_wait(maxevents=16) called by user again

  7) put_user + dequeue on 4 remaining items in unconsumed.queue

     We can safely return 4 events back to the user at this point.

     However, this might break compatibility for existing users.  I'm
     not sure if there are any weird apps that know/expect the event
     count they'll get from epoll_wait, but maybe there is one...

  8) We could perform a splice off main.queue to fill the remaining
     slots the user requested, but we do not know if the items we
     splice from main.queue at this point were just dequeued in step 7.

     If we loaded main.queue.tail before step 7, we could safely splice
     into unconsumed.queue and know when to stop when repeating the
     put_user + dequeue loop (rough sketch below).
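
Roughly, in code (sketch only; ep->main_*, ep->unconsumed_*, and
send_event_to_user() are names I made up for illustration, and edge
cases such as an empty main.queue at snapshot time are glossed over):

static int ep_send_events_sketch(struct eventpoll *ep, int maxevents)
{
	struct wfcq_node *node, *sentinel;
	int sent = 0;

	/* load main.queue's tail before step 7 so step 8 knows
	 * where to stop */
	sentinel = CMM_LOAD_SHARED(ep->main_tail.p);

	/* step 7: drain leftovers from the previous call */
	while (sent < maxevents &&
	       (node = __wfcq_dequeue(&ep->unconsumed_head,
				      &ep->unconsumed_tail)) != NULL)
		sent += send_event_to_user(ep, node);	/* put_user */

	if (sent == maxevents)
		return sent;

	/* step 8: pull main.queue over, but only consume up to
	 * the pre-step-7 snapshot */
	__wfcq_splice(&ep->unconsumed_head, &ep->unconsumed_tail,
		      &ep->main_head, &ep->main_tail);

	while (sent < maxevents &&
	       (node = __wfcq_dequeue(&ep->unconsumed_head,
				      &ep->unconsumed_tail)) != NULL) {
		sent += send_event_to_user(ep, node);
		if (node == sentinel)
			break;	/* newer events wait for the next call */
	}

	return sent;
}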
