Message-Id: <1233709678.19806.107.camel@yhuang-dev.sh.intel.com>
Date: Wed, 04 Feb 2009 09:07:58 +0800
From: Huang Ying <ying.huang@...el.com>
To: Andrew Morton <akpm@...ux-foundation.org>
Cc: Herbert Xu <herbert@...dor.apana.org.au>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"linux-crypto@...r.kernel.org" <linux-crypto@...r.kernel.org>
Subject: Re: [PATCH 2/3] crypto: Per-CPU cryptd thread implementation based
on kcrypto_wq
On Tue, 2009-02-03 at 17:10 +0800, Andrew Morton wrote:
> On Mon, 02 Feb 2009 14:42:20 +0800 Huang Ying <ying.huang@...el.com> wrote:
>
> > The original cryptd thread implementation has a scalability issue;
> > this patch solves the issue with a per-CPU thread implementation.
> >
> > struct cryptd_queue is defined to be a per-CPU queue, which holds one
> > struct cryptd_cpu_queue for each CPU. In struct cryptd_cpu_queue, a
> > struct crypto_queue holds all requests for the CPU, and a struct
> > work_struct is used to run those requests.
> >
> > ...
> >
> > +int cryptd_init_queue(struct cryptd_queue *queue, unsigned int max_cpu_qlen)
> > +{
> > + int cpu;
> > + struct cryptd_cpu_queue *cpu_queue;
> > +
> > + queue->cpu_queue = alloc_percpu(struct cryptd_cpu_queue);
> > + if (!queue->cpu_queue)
> > + return -ENOMEM;
> > + for_each_possible_cpu(cpu) {
> > + cpu_queue = per_cpu_ptr(queue->cpu_queue, cpu);
> > + crypto_init_queue(&cpu_queue->queue, max_cpu_qlen);
> > + INIT_WORK(&cpu_queue->work, cryptd_queue_worker);
> > + }
> > + return 0;
> > +}
>
> Could be made static, I believe.
OK. I will make it static.
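For reference, the two structures the patch description refers to look
roughly like this; their definitions are not quoted in this excerpt, so
the sketch below is inferred from the code (queue->cpu_queue,
cpu_queue->queue, cpu_queue->work):

	struct cryptd_cpu_queue {
		struct crypto_queue queue;	/* requests for this CPU */
		struct work_struct work;	/* runs those requests */
	};

	struct cryptd_queue {
		/* per-CPU, from alloc_percpu() */
		struct cryptd_cpu_queue *cpu_queue;
	};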
> > +void cryptd_fini_queue(struct cryptd_queue *queue)
> > +{
> > + int cpu;
> > + struct cryptd_cpu_queue *cpu_queue;
> > +
> > + for_each_possible_cpu(cpu) {
> > + cpu_queue = per_cpu_ptr(queue->cpu_queue, cpu);
> > + BUG_ON(cpu_queue->queue.qlen);
> > + }
> > + free_percpu(queue->cpu_queue);
> > +}
>
> Ditto.
As above.
> > +int cryptd_enqueue_request(struct cryptd_queue *queue,
> > + struct crypto_async_request *request)
> > +{
> > + int cpu, err;
> > + struct cryptd_cpu_queue *cpu_queue;
> > +
> > + cpu = get_cpu();
> > + cpu_queue = per_cpu_ptr(queue->cpu_queue, cpu);
> > + err = crypto_enqueue_request(&cpu_queue->queue, request);
> > + queue_work_on(cpu, kcrypto_wq, &cpu_queue->work);
> > + put_cpu();
> > +
> > + return err;
> > +}
>
> Ditto?
As above.
> > +static void cryptd_queue_worker(struct work_struct *work)
>
> A few comments explaining the code wouldn't kill us...
OK. I will add some comments. This is the work_struct function for
cryptd_cpu_queue; it does the real encryption/decryption.
> > +{
> > + struct cryptd_cpu_queue *cpu_queue;
> > + struct crypto_async_request *req, *backlog;
> > +
> > + cpu_queue = container_of(work, struct cryptd_cpu_queue, work);
> > + /* Only handle one request at a time to avoid hogging crypto
> > + * workqueue */
>
> Not sure what that means.
cryptd is not the only user of kcrypto_wq (chainiv now uses it too), and
there may be many requests pending in a cryptd_cpu_queue. To reduce the
latency seen by other kcrypto_wq users, only one request is processed per
invocation of the work_struct function; the work_struct is then queued
again if more requests remain.
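Stripped of the cryptd specifics, the pattern is roughly the following
(a sketch with hypothetical names, not code from the patch):

	/* Handle one request per work invocation, then requeue
	 * ourselves if more requests are pending, so other
	 * kcrypto_wq users are not starved. */
	static void one_shot_worker(struct work_struct *work)
	{
		struct my_cpu_queue *q =
			container_of(work, struct my_cpu_queue, work);
		struct my_request *req = my_dequeue(q);	/* hypothetical */

		if (req)
			my_process(req);	/* hypothetical */
		if (my_queue_len(q))		/* more pending? */
			queue_work(kcrypto_wq, &q->work);
	}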
> > + preempt_disable();
> > + backlog = crypto_get_backlog(&cpu_queue->queue);
> > + req = crypto_dequeue_request(&cpu_queue->queue);
> > + preempt_enable();
>
> This function is run by a per-cpu thread, so it cannot be migrated to
> another CPU by the CPU scheduler.
>
> It is quite unclear what this preempt_disable() is needed for. Please
> either remove it or provide adequate comments.
preempt_disable() is used to protect cpu_queue->queue, which can also be
accessed by cryptd_enqueue_request(). While cryptd_queue_worker() is
running, it can be preempted by a task that calls
cryptd_enqueue_request() on the same CPU.
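To illustrate the window being closed (an assumed interleaving, for
illustration only):

	/*
	 * Without preempt_disable(), on one CPU:
	 *
	 *   cryptd_queue_worker()            another task
	 *   ---------------------            ------------
	 *   backlog = crypto_get_backlog()
	 *     ... preempted ...
	 *                                    cryptd_enqueue_request()
	 *                                      get_cpu()   (same CPU)
	 *                                      crypto_enqueue_request()
	 *                                      put_cpu()
	 *   req = crypto_dequeue_request()
	 *
	 * The queue changes between the backlog lookup and the dequeue,
	 * so the backlog/request pair the worker operates on may no
	 * longer be consistent.
	 */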
> > + if (!req)
> > + goto out;
> > +
> > + if (backlog)
> > + backlog->complete(backlog, -EINPROGRESS);
> > + req->complete(req, 0);
> > +out:
> > + if (cpu_queue->queue.qlen)
> > + queue_work(kcrypto_wq, &cpu_queue->work);
>
> Again, unclear and needs commentary.
>
> If we come here via the `goto out;' path above, this CPU queue has no
> more work do do, I think? If so, why does it requeue itself?
If there is no more work to do, cpu_queue->queue.qlen will be zero, so
the worker will not requeue itself. If we come here via the other path,
we may need to do that.
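In other words, the final qlen test covers both exit paths; the quoted
code again, with my annotations:

	if (!req)
		goto out;	/* queue was empty, so qlen == 0 below */
	...
out:
	if (cpu_queue->queue.qlen)	/* requeue only if requests remain */
		queue_work(kcrypto_wq, &cpu_queue->work);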
Best Regards,
Huang Ying