curl-library
Re: Running out of sockets
Date: Fri, 03 Dec 2004 13:16:14 +0100
Rob Booth <rbooth_at_indyme.com> writes:
> My application needs to send data (via FTP) to multiple machines (500+).
> Once we got to the 500+ mark we started getting an "Failed to upload file
> 'x' couldn't create socket" error. Looking into my application I found that
500 uploads mean 500 control connections and 500 data connections, i.e. two
file descriptors each. The usual limit is 1024 file descriptors, and stdin,
stdout and stderr take three of those. So the 511th upload should already
fail: 3 + 510 * 2 = 1023 descriptors are used up by then.
If you must, increase the number of file descriptors you may use. But I
would change to another protocol; FTP is evil.
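For what it's worth, here is a minimal sketch of raising the soft descriptor
limit from within the process (the shell equivalent is `ulimit -n`); raising
it beyond the hard limit needs root, and the function name is just my own:

#include <stdio.h>
#include <sys/resource.h>

/* Raise the per-process file descriptor soft limit up to the hard limit. */
int raise_fd_limit(void)
{
    struct rlimit rl;

    if (getrlimit(RLIMIT_NOFILE, &rl) == -1) {
        perror("getrlimit");
        return -1;
    }
    rl.rlim_cur = rl.rlim_max;   /* soft limit -> hard limit */
    if (setrlimit(RLIMIT_NOFILE, &rl) == -1) {
        perror("setrlimit");
        return -1;
    }
    return 0;
}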
> using libCURL to make this FTP transaction results in 4 sockets. Other
> actions being taken by my application take this socket count up to 13 per
> machine being transferred to, not counting other overhead sockets that are
> used for database and http connections. Now each transfer takes less than 1
> second and I'm setup to distribute to 8 machines at a time (forked
> processes).
Are all those sockets alive or just lingering around harmlessly for
their reuse timeout?
> Looking at my network statistics I see my sockets end up hanging around for
> approximately 1 minute from the time of release. What I want to know is, is
> there a way to force my operating system (Linux - Red Hat) to release those
> connections as soon as I'm done with them? I'm sure this isn't really a
> libCURL issue, but I thought this would be the best place to get an answer.
Set the socket to be reusable after closing (translating this to C is left
for the reader):
void Socket::reuse() { // Allow the local address to be reused (SO_REUSEADDR)
  int one = 1;
  if (::setsockopt(fd, SOL_SOCKET, SO_REUSEADDR, &one, sizeof(one)) == -1) {
    int err = errno;
    fprintf(stderr, "Socket::reuse: setsockopt SO_REUSEADDR failed: %s\n",
            strerror(err));
    throw err;
  }
}
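And, for the reader who does not want to do the translation, one possible
plain-C rendering of the same call (a sketch; it assumes `fd` is an
already-created socket descriptor and the helper name is my own):

#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <sys/socket.h>

/* Mark the local address as reusable right after the socket is closed. */
int socket_reuse(int fd)
{
    int one = 1;

    if (setsockopt(fd, SOL_SOCKET, SO_REUSEADDR, &one, sizeof(one)) == -1) {
        fprintf(stderr, "setsockopt SO_REUSEADDR failed: %s\n",
                strerror(errno));
        return -1;
    }
    return 0;
}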
> Thanks,
> Rob
Regards,
Goswin
Received on 2004-12-03