
Falcon 3 Cookiecutter Template & Benchmarks - Pozetron Inc

As we like to do here at Pozetron, we’ve been keeping our ear to the ground for the sound of seismic shifts in technology. We’re happy to report that we heard the rumble and came running. We are, of course, talking about… Async Python.

We are big fans of the Falcon web framework, which is used throughout the company to power our microservices. While we were hitting refresh on the changelog for the 17th time before our morning coffee, we noticed something different. Did you spot it?

That’s right! ASGI support landed right in our playground.

Since everyone knows that Async Python is Faster, we wanted to update our Falcon Cookiecutter Template right away. Without further ado, we hereby present to you: https://github.com/pozetroninc/cookiecutter-falcon3

It provides a WSGI version running on top of Bjoern as well as an ASGI version running on Uvicorn.
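The two variants are launched differently: Bjoern has no CLI of its own, so the WSGI version is started from a small Python entry point, while Uvicorn ships its own CLI. Roughly like this (the module paths are illustrative, not necessarily the template's exact layout):

```shell
# WSGI variant: Bjoern has no CLI, so it is launched from Python
# (module path is hypothetical)
python -c "import bjoern; from app.wsgi import app; bjoern.run(app, '127.0.0.1', 8000)"

# ASGI variant: Uvicorn ships its own CLI
uvicorn app.asgi:app --host 127.0.0.1 --port 8000
```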

Since we make some highfalutin claims of “blazingly fast” performance in the README, we should probably back that up with some completely scientific benchmarks. For this we used the Health Check route, which is essentially the Plaintext benchmark: it just returns a hard-coded string.

~= Run One =~

$ wrk -t6 -c400 -d30s http://127.0.0.1:8000/healthz
Running 30s test @ http://127.0.0.1:8000/healthz
  6 threads and 400 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    74.06ms    5.43ms 161.61ms   90.68%
    Req/Sec     0.89k   305.93     1.33k    67.52%
  160217 requests in 30.05s, 19.56MB read
Requests/sec:   5332.21
Transfer/sec:    666.53KB

~= Run Two =~

$ wrk -t6 -c400 -d30s http://127.0.0.1:8000/healthz
Running 30s test @ http://127.0.0.1:8000/healthz
  6 threads and 400 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    34.79ms    4.22ms  90.84ms   93.18%
    Req/Sec     1.90k   207.99     2.64k    88.00%
  341417 requests in 30.04s, 41.68MB read
Requests/sec:  11364.53
Transfer/sec:      1.39MB

~= Run Three =~

$ wrk -t6 -c400 -d30s http://127.0.0.1:8000/healthz
Running 30s test @ http://127.0.0.1:8000/healthz
  6 threads and 400 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    15.39ms   36.03ms   1.06s    99.30%
    Req/Sec     5.12k   667.27     9.90k    78.39%
  917265 requests in 30.04s, 85.73MB read
Requests/sec:  30531.08
Transfer/sec:      2.85MB

Did you notice when we cut over to ASYNC SPEED? (Yes, that’s a trick question.)

Here’s the thing about benchmarks: you have to take them all with a pound of Salt & Vinegar. The results you see above don’t tell the story you expect them to. Run One is the first to use Uvicorn; in fact, both Run One and Run Two use it. Run Three is good old reliable WSGI running on top of the battle-tested Bjoern.

The difference between Run One and Run Two comes entirely from changing Uvicorn's log level from "info" to "critical". Those who have been around for a while know that writing to the terminal from Python carries quite a penalty, a penalty that Bjoern, out of the box, refuses to pay. Removing that cost brings the Uvicorn numbers closer to the Bjoern numbers, but it was never even close.
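If you want to reproduce the tweak, it is a one-flag change on Uvicorn's CLI (or, equivalently, the `log_level` argument to `uvicorn.run`). The module path below is illustrative:

```shell
# Run Two's only change from Run One: silence per-request logging
uvicorn app.asgi:app --host 127.0.0.1 --port 8000 --log-level critical
```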

So in summary, when you are making a decision on something new, make sure you are making it fully informed and for the right reasons. Your assumptions about the cool new tech might actually turn out to be wrong.
