curl-library

Re: libcurl multithreading

From: Török Edvin <edwintorok_at_gmail.com>
Date: Fri, 22 Sep 2006 23:07:59 +0300

On 9/22/06, Jamie Lokier <jamie_at_shareable.org> wrote:
> Török Edvin wrote:
> > queue_timeout_for_current_thread(timeout)
> > - when one of the timeouts expire, it either pthread_cancel, or
> > pthread_kill that thread
> > - waiting for a timeout doesn't need to use SIGUSR1, you can use
> > pthread_cond_timedwait, sleep
>
> Sounds good, I like it :)
> That'll work, provided pthread_cond_timedwait() is available.
> Other threads, not involved, don't have to do anything.

Would you like to implement it too? ;-)

>
> > use pthread_cond_timedwait(), that is use wait instead of signals.
>
> Perhaps the Pthreads Primer document suggests the two signals because
> pthread_cond_timedwait was not universally available at the time?
>
> (I guess select() or poll() in the global thread, on a pipe or with
> pthread_kill()+longjmp(), would do, in the unlikely event that
> pthread_cond_timedwait() isn't available.)

I guess.

>
> > >And after all that, I'm not convinced it's safe anyway to interrupt
> > >DNS resolution at an arbitrary time. How does curl (when
> > >single-threaded) ensure that it's safe to longjmp from the SIGALRM
> > >handler interrupting DNS resolution? What happens to file descriptors
> > >that may have been opened by the resolver code, and resolver data
> > >structures that might be in a temporary state?
> >
> > Good question. Would it be safe to kill a resolver thread?
>
> Not portably, I'd guess.

So, what should be done when the timeout expires?

> Even with pthreads, there is the matter of doing a DNS resolve in one
> thread, while another thread does fork/exec for some application
> reason. The fork/exec may inherit file descriptors created
> temporarily by the resolver. There is no tidy way to handle this in a
> library.

Use shared memory or message queues for communication instead of pipes
if you don't want the pipe fd leaked. You can also set the
close-on-exec flag on the pipe, so it won't be inherited across execve.

>

> I don't think you can portably close all file descriptors.
>
> The nearest thing to portable is to call getrlimit(RLIMIT_NOFILE) to
> get the "maximum number of file descriptors allowed" and try closing
> them all, but that's no good if it's a very large number (very slow),
> or RLIM_INFINITY.

Unfortunately yes.

Edwin
Received on 2006-09-22