83% of requests (globally) to our apex homepages (bbc.co.uk/ and bbc.com/) are from "Python Requests".

That's 84 million requests per day.
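Those two figures imply a total traffic volume; a quick back-of-the-envelope check (assuming the 83% and the 84 million refer to the same traffic window):

```python
# If "Python Requests" accounts for 84 million requests/day and that is
# 83% of all apex-homepage traffic, the total comes out to roughly
# 101 million requests/day.
python_requests_per_day = 84_000_000
share = 0.83
total_per_day = python_requests_per_day / share
print(round(total_per_day / 1_000_000))  # ≈ 101 (million requests/day)
```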

In the words of Bricktop...

#BBC #WebStats #PythonRequests #BrickTop

Cybersecurity cert prep: Lab 13 (Path Traversal) — sending Python requests through a Burp Suite proxy

https://peertube.eqver.se/w/3hkPcyW7dpkDyaJUNWGSnf

Cybersecurity cert prep: Lab 18 (Auth) — proper password guessing with Burp Intruder and Python requests

https://peertube.eqver.se/w/x6Ltdd5dxKhoyJC3mCV2yu

State of client side HTTP in #python

- stdlib http.client - HTTP/1.1 only; the docs recommend using requests instead.
- requests - poorly maintained; in 2020 it stopped working when servers dropped older TLS versions. HTTP/1.1 only, sync only.
- #httpx - comparatively slow, and its API design is driven by compatibility with browsers. Some users plug in aiohttp for better performance. Supports HTTP/2, but it's discouraged as not optimized.
- #aiohttp - good API, both client and server, but seems more focused on the server side; HTTP/1.1 only.
- #niquests - an async fork of requests with HTTP/2, but it uses a forked urllib3 under the same package name as the original, which messes up deployments.
- #aioquic - client & server; HTTP/3 only.
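For context on the first bullet, here is a minimal sketch of what a single GET involves with the stdlib http.client (a throwaway local server keeps the snippet self-contained; the handler and the "hello" body are made up for illustration):

```python
import http.client
import http.server
import threading

# Illustrative local server so the sketch runs without network access.
class Hello(http.server.BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"  # stdlib handlers default to HTTP/1.0

    def do_GET(self):
        body = b"hello"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the example's output quiet

server = http.server.HTTPServer(("127.0.0.1", 0), Hello)
threading.Thread(target=server.serve_forever, daemon=True).start()

# The stdlib client: connection management and response parsing are
# manual, and the protocol is HTTP/1.1 only (resp.version == 11).
conn = http.client.HTTPConnection("127.0.0.1", server.server_port)
conn.request("GET", "/")
resp = conn.getresponse()
body = resp.read()
conn.close()
server.shutdown()
```

With requests the same exchange collapses to a single call, which is why the stdlib docs point users there despite the maintenance concerns above.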

🤦

#pythonRequests #programming

Yes, yes, definitely fork #PythonRequests, and while we're at it a few other important Python libraries as its dependencies, and throw in a few home-grown inventions on top. Oh, and definitely overwrite the original libraries while doing so. What could possibly go wrong?

https://pypi.org/project/niquests/

#Python

Yes, yes, please fork #PythonRequests and a bunch of other high-profile #Python libraries as its dependencies, and add some more #NIH dependencies to that. Oh, yes, and definitely overwrite the original packages in the process! What could possibly go wrong?

https://pypi.org/project/niquests/

#packaging

While reviewing a merge request from a coworker who was debugging a mysterious behavior where we would get decompressed tar.gz files from some origins when using #PythonRequests, I figured out that:

  • The default #Apache2 config serves gzip files without touching them, but sets both Content-Encoding: gzip and Content-Type: application/x-gzip on the response;
  • Per the #HTTP RFCs, a conforming client should interpret that as "this is a gzipped representation (Content-Encoding) of a gzip file (Content-Type); gunzip it before presenting it to the user";
  • #Firefox hardcodes a workaround;
  • This was already reported as a dubious behavior upstream in 2002!

At this point this seems so entrenched that I guess all HTTP client libraries should really be implementing the workaround...
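The effect can be reproduced offline; a minimal sketch (the payload bytes are made up). Because the transfer is marked Content-Encoding: gzip, a conforming client strips exactly one layer of gzip, so the .gz wrapper the user actually asked for disappears:

```python
import gzip

payload = b"inner tar data"        # stand-in for the tar archive
gz_file = gzip.compress(payload)   # the .gz file sitting on the server

# Apache serves gz_file as-is but also sets Content-Encoding: gzip,
# so a conforming client (like requests) transparently gunzips the
# transfer encoding once and hands back the *unwrapped* bytes:
received = gzip.decompress(gz_file)
assert received == payload         # the .gz wrapper is gone
```

With requests specifically, one way to keep the bytes untouched is to read the raw urllib3 response with decoding disabled, e.g. `resp.raw.stream(chunk_size, decode_content=False)` on a streamed response.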

package/utils: Prevent content automatic deflate in download function (!488) · Merge requests · Platform / Development / swh-loader-core · GitLab

The Python requests library automatically decompresses downloaded content bytes if the response header Content-Encoding is set to a supported encoding. This behavior can make a file...