Re: libcurl multithreading

From: Jamie Lokier <jamie_at_shareable.org>
Date: Fri, 22 Sep 2006 19:56:45 +0100

Török Edvin wrote:
> queue_timeout_for_current_thread(timeout)
> - when one of the timeouts expire, it either pthread_cancel, or
> pthread_kill that thread
> - waiting for a timeout doesn't need to use SIGUSR1, you can use
> pthread_cond_timedwait, sleep

Sounds good, I like it :)
That'll work, provided pthread_cond_timedwait() is available.
Other threads that aren't involved don't have to do anything.
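
Roughly what I have in mind, as an untested sketch: one global
watcher thread sleeps in pthread_cond_timedwait() and cancels a
worker whose deadline expires. A real version needs a queue of
deadlines; this one has a single slot, and arm_timeout(),
disarm_timeout() and watcher_main() are names I've just made up.

#include <errno.h>
#include <pthread.h>
#include <stdbool.h>
#include <time.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t cond = PTHREAD_COND_INITIALIZER;
static pthread_t target;          /* thread being watched */
static struct timespec deadline;  /* absolute expiry time */
static bool armed = false;

/* Called by a worker before it starts a blocking operation. */
void arm_timeout(unsigned seconds)
{
    pthread_mutex_lock(&lock);
    target = pthread_self();
    clock_gettime(CLOCK_REALTIME, &deadline);
    deadline.tv_sec += seconds;
    armed = true;
    pthread_cond_signal(&cond);      /* wake the watcher */
    pthread_mutex_unlock(&lock);
}

/* Called by the worker when the operation finished in time. */
void disarm_timeout(void)
{
    pthread_mutex_lock(&lock);
    armed = false;
    pthread_cond_signal(&cond);
    pthread_mutex_unlock(&lock);
}

/* The one global watcher thread; no signals needed. */
void *watcher_main(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&lock);
    for (;;) {
        while (!armed)
            pthread_cond_wait(&cond, &lock);
        if (pthread_cond_timedwait(&cond, &lock, &deadline) == ETIMEDOUT
            && armed) {
            pthread_cancel(target);  /* or pthread_kill(target, sig) */
            armed = false;
        }
        /* otherwise the worker disarmed or re-armed; go around again */
    }
    return NULL;                     /* not reached */
}

pthread_kill() would slot in where pthread_cancel() is, if the
signal+longjmp scheme is preferred.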

> use pthread_cond_timedwait(), that is use wait instead of signals.

Perhaps the Pthreads Primer document suggests the two signals because
pthread_cond_timedwait was not universally available at the time?

(I guess select() or poll() in the global thread, on a pipe or with
pthread_kill()+longjmp(), would do, in the unlikely event that
pthread_cond_timedwait() isn't available.)
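
The pipe variant would look something like this (untested sketch;
watcher_wait(), watcher_wake() and wake_fd are made-up names): the
poll() timeout plays the role of the condvar deadline, and any
thread can wake the watcher by writing a byte.

#include <poll.h>
#include <unistd.h>

static int wake_fd[2];  /* set up once with pipe(wake_fd) */

/* Wait up to timeout_ms for a wakeup byte; returns 1 if woken,
   0 if the timeout expired. */
int watcher_wait(int timeout_ms)
{
    struct pollfd pfd = { .fd = wake_fd[0], .events = POLLIN };
    if (poll(&pfd, 1, timeout_ms) > 0) {
        char c;
        (void)read(wake_fd[0], &c, 1);  /* drain the wakeup byte */
        return 1;
    }
    return 0;  /* timed out: cancel/kill the overdue thread */
}

/* Called from any thread to wake the watcher early. */
void watcher_wake(void)
{
    (void)write(wake_fd[1], "x", 1);
}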

> >And after all that, I'm not convinced it's safe anyway to interrupt
> >DNS resolution at an arbitrary time. How does curl (when
> >single-threaded) ensure that it's safe to longjmp from the SIGALRM
> >handler interrupting DNS resolution? What happens to file descriptors
> >that may have been opened by the resolver code, and resolver data
> >structures that might be in a temporary state?
>
> Good question. Would it be safe to kill a resolver thread?

Not portably, I'd guess.

> >I favour, instead of all that, just fork/exec process(es) to do DNS
> >resolving, communicating with it over a pipe. Kill processes that
> >don't reply in time. That can be easily thread-safe and portable.
>
> If pthread is not available, yes a better solution than longjmp anyway.

Even with pthreads, there is the matter of doing a DNS resolve in one
thread while another thread does fork/exec for some application
reason. The forked child may inherit file descriptors created
temporarily by the resolver. There is no tidy way to handle this in a
library.
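
For reference, a rough cut of the resolver-process idea (untested;
resolve_with_timeout() is a made-up name, it returns only the first
IPv4 address, and a real version would exec a small helper binary
rather than calling getaddrinfo() straight after fork() in a
threaded program):

#include <netdb.h>
#include <netinet/in.h>
#include <poll.h>
#include <signal.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int resolve_with_timeout(const char *host, int timeout_ms,
                         struct in_addr *out)
{
    int fds[2];
    if (pipe(fds) != 0)
        return -1;

    pid_t pid = fork();
    if (pid < 0) {
        close(fds[0]); close(fds[1]);
        return -1;
    }
    if (pid == 0) {                       /* child: do the lookup */
        close(fds[0]);
        struct addrinfo hints = { 0 }, *res;
        hints.ai_family = AF_INET;
        if (getaddrinfo(host, NULL, &hints, &res) == 0) {
            struct sockaddr_in *sa = (struct sockaddr_in *)res->ai_addr;
            write(fds[1], &sa->sin_addr, sizeof *out);
        }
        _exit(0);
    }

    close(fds[1]);                        /* parent: wait, with timeout */
    struct pollfd pfd = { .fd = fds[0], .events = POLLIN };
    int ok = -1;
    if (poll(&pfd, 1, timeout_ms) > 0 &&
        read(fds[0], out, sizeof *out) == (ssize_t)sizeof *out)
        ok = 0;
    else
        kill(pid, SIGKILL);               /* lookup took too long */
    close(fds[0]);
    waitpid(pid, NULL, 0);                /* reap it either way */
    return ok;
}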

> >It's not perfect, because fork/exec races with file descriptor
> >creation in other threads (an unfortunate unix API fault), so those
> >child processes would sometimes inherit file descriptors that you'd
> >rather were closed.
>
> After forking, just close all file descriptors, other than the file
> descriptor used for communication with the main process. Or use
> clone/unshare if you are under Linux.

I don't think you can portably close all file descriptors.

The nearest thing to portable is to call getrlimit(RLIMIT_NOFILE) to
get the "maximum number of file descriptors allowed" and try closing
them all, but that's no good if the limit is very large (closing them
all is slow), or RLIM_INFINITY.
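
That loop would look something like this (sketch;
close_all_except() is a made-up name):

#include <sys/resource.h>
#include <unistd.h>

void close_all_except(int keep_fd)
{
    struct rlimit rl;
    long max = 1024;                      /* fallback guess */
    if (getrlimit(RLIMIT_NOFILE, &rl) == 0 &&
        rl.rlim_cur != RLIM_INFINITY)
        max = (long)rl.rlim_cur;
    for (long fd = 0; fd < max; fd++)
        if (fd != (long)keep_fd)
            close((int)fd);
}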

(Also, clone/unshare on Linux doesn't close file descriptors; it may
clone them.)

-- Jamie
Received on 2006-09-22