Subject: The response to the HTTP POST request is split into parts
From: Александр via curl-and-python <curl-and-python_at_cool.haxx.se>
Date: Sun, 14 Mar 2021 21:00:22 +0300
Greetings.
I am trying to use the curl example from the repository
https://github.com/pycurl/pycurl/blob/master/examples/retriever-multi.py
Instead of saving to a file, I use WRITEFUNCTION.
When the response from the server is large, it is split into parts. I am
probably doing something wrong.
Because the response arrives in parts, I cannot work with the JSON.
Please tell me, how can I use WRITEFUNCTION to handle a large JSON response?
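For reference: libcurl hands the response body to WRITEFUNCTION in chunks (by default at most about 16 kB per call, which matches the lengths printed in the output below), so a large body always arrives across several invocations of check_data. The usual pattern is to append every chunk to a buffer and decode/parse the JSON only after the transfer has completed, since JSON cannot be parsed piecewise without a streaming parser. A minimal single-handle sketch of that pattern; the URL and POST body are placeholders, and the "response" key is taken from the output below:
***
import io
import json

import pycurl

buf = io.BytesIO()

c = pycurl.Curl()
c.setopt(pycurl.URL, "https://example.com/api")   # placeholder endpoint
c.setopt(pycurl.POSTFIELDS, "key=value")          # placeholder POST body
c.setopt(pycurl.WRITEFUNCTION, buf.write)         # called once per received chunk
c.perform()
c.close()

# Only after perform() returns is the body complete and safe to parse.
data = json.loads(buf.getvalue().decode("utf-8"))
print(len(data["response"]))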
***
Output of the program:
BUF - {"response":[{"id":6001,"name":"Концерт \"Picasso\" в арт-клубе
\"Манхэттен\"","screen_name":" len - 15380
BUF - 83cR49qqrd-NquOCUFkND4FunHI.jpg?size=50x0&quality=96&crop=126,80,613,613&ava=1","photo_100":"https:\
len - 113
BUF - erapi.com\/s\/v1\/ig2\/JLMiTVzQcB3-uI8oP9IVu5NCzXBBsvcwk8INNt3Zz_gKg17c73tNt5Ierfc-KCfRREi4eZMWXtgtU
len - 15810
BUF - /FkG4FWgSBPhmdTfH6SHCtO4Ial85objK6iRlPBUE0o2DV1KsRyhbGbCpXsjo9y0Kuykg5svRpnj95obslrvtl3GV.jpg?size=2
len - 15896
BUF - me":"club6124","is_closed":1,"type":"group","members_count":0,"photo_50":"https:\/\/vk.com\/images\/
len - 15924
***
def check_data(buf):
    buf_utf = buf.decode("utf-8", "ignore")
    print(f'BUF - {buf_utf[:100]} len - {len(buf_utf)}\n')
***
for i in range(num_conn):
    c = pycurl.Curl()
    c.setopt(pycurl.URL, URL)
    c.setopt(pycurl.POST, 1)
    c.setopt(pycurl.FOLLOWLOCATION, 1)
    c.setopt(pycurl.MAXREDIRS, 5)
    c.setopt(pycurl.CONNECTTIMEOUT, 300)
    c.setopt(pycurl.TIMEOUT, 3000)
    c.setopt(pycurl.NOSIGNAL, 1)
    c.setopt(pycurl.WRITEFUNCTION, check_data)
    m.handles.append(c)

freelist = m.handles[:]
num_processed = 0
while num_processed < num_urls:
    # If there is an url to process and a free curl object, add to multi stack
    while queue and freelist:
        post = queue.pop(0)
        c = freelist.pop()
        c.setopt(pycurl.POSTFIELDS, post)
        m.add_handle(c)
    # Run the internal curl state machine for the multi stack
    while 1:
        ret, num_handles = m.perform()
        if ret != pycurl.E_CALL_MULTI_PERFORM:
            break
    # Check for curl objects which have terminated, and add them to the freelist
    while 1:
        num_q, ok_list, err_list = m.info_read()
        for c in ok_list:
            m.remove_handle(c)
            freelist.append(c)
        for c, errno, errmsg in err_list:
            c.fp = None
            m.remove_handle(c)
            freelist.append(c)
        num_processed = num_processed + len(ok_list) + len(err_list)
        if num_q == 0:
            break
***
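Applied to the multi-interface loop above, the same idea is to give each easy handle its own buffer when a transfer is added, and to parse that buffer once info_read() reports the handle as finished. A hedged sketch along the lines of the excerpt; the endpoint, the posts list and the buf attribute are made up for illustration (pycurl handles accept arbitrary Python attributes, which the c.fp line above already relies on):
***
import io
import json

import pycurl

URL = "https://example.com/api"          # placeholder endpoint
posts = ["key=value1", "key=value2"]     # placeholder POST bodies
num_conn = 2

m = pycurl.CurlMulti()
m.handles = []
for i in range(num_conn):
    c = pycurl.Curl()
    c.buf = None                         # per-handle response buffer (illustrative attribute)
    c.setopt(pycurl.URL, URL)
    c.setopt(pycurl.POST, 1)
    c.setopt(pycurl.NOSIGNAL, 1)
    m.handles.append(c)

freelist = m.handles[:]
queue = posts[:]
num_processed = 0
while num_processed < len(posts):
    # Start new transfers while there is work and a free handle
    while queue and freelist:
        post = queue.pop(0)
        c = freelist.pop()
        c.buf = io.BytesIO()
        c.setopt(pycurl.WRITEFUNCTION, c.buf.write)   # every chunk is appended here
        c.setopt(pycurl.POSTFIELDS, post)
        m.add_handle(c)
    # Run the internal curl state machine for the multi stack
    while 1:
        ret, num_handles = m.perform()
        if ret != pycurl.E_CALL_MULTI_PERFORM:
            break
    # Only handles reported by info_read() hold a complete body
    while 1:
        num_q, ok_list, err_list = m.info_read()
        for c in ok_list:
            payload = json.loads(c.buf.getvalue().decode("utf-8"))
            print("items:", len(payload.get("response", [])))
            m.remove_handle(c)
            freelist.append(c)
        for c, errno, errmsg in err_list:
            m.remove_handle(c)
            freelist.append(c)
        num_processed = num_processed + len(ok_list) + len(err_list)
        if num_q == 0:
            break
    # Sleep until more data is available instead of busy-looping
    m.select(1.0)
***
The original retriever-multi.py example stores an open file object on each handle (c.fp) in the same way; swapping that file for an in-memory buffer and parsing it on completion is the main change sketched here.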
_______________________________________________
https://cool.haxx.se/cgi-bin/mailman/listinfo/curl-and-python
Received on 2021-03-14