curl-users
using complex websites with cURL
Date: Tue, 21 Aug 2001 18:18:58 +0300 (EEST)
I wrote to Daniel some time ago when I found out about cURL and its
features. I started learning perl after finding out that a perl script
(using regexps) can extract data from text files. After reading some
tutorials, I found out about the libwww modules, and after some time I was
able to convince my sysadmin to install those CPAN modules on our server.
Anyway, to make a long story short, the reason I learnt perl is that I was
looking for somebody special to spend my life with. I was looking for love
on the Internet, because nowadays distances are easy to overcome and there
are lots of great people out there... I wanted to find someone to be with,
and thanks to perl-lwp I managed to make my dreams come true. On 7th July
I married a lady who lives on the other side of the world... all this
thanks to some quite primitive web agents/crawlers that played with HTML
tags and extracted some info from the text files (that were downloaded
through libwww).
Unfortunately, when I tried to do more complex web browsing through
libwww scripts, such as logging in to sites that use cookies (plus
redirects, forms, etc.), I failed, despite the numerous questions I've
posted on the libwww mailing list. My programming skills are not too high,
at least not when looking at the libwww/perl sources, because I don't know
anything about Linux sockets and low-level network/Internet communication.
Because of these (and other) gaps, I wasn't able to modify the sources to
suit my needs, nor have I been able to find the bug. But a guy who tried
to help me some time ago (Tim Allwine) actually discovered where the
problem in the libwww sources was... and it seems the problems were
somehow solved in a newer version of libwww, but I wasn't able to install
that version and test some new scripts.
Can anybody help me with some URLs and/or some documentation on Internet
programming? I am not talking about PHP/MySQL or perl tools for
developing web sites. I want to learn more about how TCP/IP
(HTTP, FTP, mail) works and how to build scripts/programs that are able
to "talk" to each other, mainly in order to browse the Internet, because
I want to start building some agents that would help people like me find
a (better) job, a special person to connect to, or a specific product,
service, or piece of information. As I told Daniel some time ago, I want
to build some scripts that would send/fill in a person's information
(CV/resume) on 1000 (more or less) specific web sites (ejobs.com, ejobs,
career.yahoo, excite, monster, etc.). To do this, I need an efficient way
to simulate browsing from (perl or whatever) scripts, so that I can add a
new user, log in, fill in forms, follow redirects... basically, I need to
play with HTTP objects and browse from a script as if I were doing it
from IE/Netscape.
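The add-a-user, log-in, fill-forms, follow-redirects flow described above
can be sketched end to end. This is not the author's libwww setup: it is a
minimal, self-contained Python sketch (the toy server, its paths, and the
cookie value are all made up for illustration) showing the three moves a
browsing agent has to make, a form POST that sets a session cookie, a
redirect followed by hand, and a cookie-bearing GET.

```python
import http.client
import http.server
import threading

# A toy server, invented for this sketch: POST /login sets a session
# cookie and redirects to /private; GET /private requires that cookie.
class Handler(http.server.BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length).decode()
        if self.path == "/login" and "user=nick" in body:
            self.send_response(302)                     # redirect after login
            self.send_header("Set-Cookie", "session=abc123")
            self.send_header("Location", "/private")
            self.end_headers()
        else:
            self.send_response(403)
            self.end_headers()

    def do_GET(self):
        if self.path == "/private" and "session=abc123" in self.headers.get("Cookie", ""):
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b"welcome back")
        else:
            self.send_response(403)
            self.end_headers()

    def log_message(self, *args):                       # keep the demo quiet
        pass

server = http.server.HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

# "Browse" like a script: submit the login form, note the cookie the
# server hands back, follow the redirect by hand, fetch the private page.
conn = http.client.HTTPConnection("127.0.0.1", port)
conn.request("POST", "/login", body="user=nick&pass=secret",
             headers={"Content-Type": "application/x-www-form-urlencoded"})
resp = conn.getresponse()
resp.read()                                             # drain before reusing
cookie = resp.getheader("Set-Cookie")
location = resp.getheader("Location")

conn.request("GET", location, headers={"Cookie": cookie})
page = conn.getresponse().read().decode()
print(page)  # -> welcome back
server.shutdown()
```

curl does the same moves from the command line with `-d` (form data),
`-c`/`-b` (cookie jar), and `-L` (follow redirects).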
How were cURL and the curl libraries made? I mean, I want to learn more
about this; I want to be able to read (and understand) the sources, and I
need to better understand TCP/IP programming (especially HTTP) and
communication. Where are the sources of cURL? What language was used to
write it? I suppose C++, but where can I learn more about network objects
and network/system programming details?
Later I will send you a practical description of a typical problem that I
have... Thank you for reading this...
-- Nick...
____________________________________________________________________________
MAY ALL OUR CHOICES BE GOOD ONES !
_____________________________________________________________________________
Received on 2001-08-21