This is an educational project; please contact me if you need further details. It was implemented with Python 3.4.
The goal is to build a caching proxy server able to relay HTTP requests between a client and an origin server and to speed up responses by caching resources. The server must handle requests in parallel for good performance.
The server runs on a selected port with a selected storage backend. Upon receiving a request from a client (usually a browser such as Firefox or Opera), the header is modified, chunked encoding is decoded if needed, and the request is forwarded to the origin server. Persistent connections are kept alive and HTTP pipelining is supported. When a response is received, it is sent to the client and saved to either the filesystem or a PostgreSQL database, depending on user preference. Cached data is stored for a configurable timeout; when a client requests the same resource, the data is served from the cache if it has not expired yet. Diagnostics are printed to the console, and a separate log is written for each worker process.
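The expiry logic described above can be sketched roughly as follows. This is an illustrative in-memory version, not the project's actual code: the timeout value, the `store`/`lookup` names, and the dict layout are all assumptions (the real server persists to the filesystem or PostgreSQL).

```python
import time

CACHE_TIMEOUT = 30.0  # seconds; configurable in the real server

cache = {}  # url -> (stored_at, response_bytes)

def store(url, response):
    """Save a response together with the time it was cached."""
    cache[url] = (time.monotonic(), response)

def lookup(url):
    """Return a cached response, or None if missing or expired."""
    entry = cache.get(url)
    if entry is None:
        return None
    stored_at, response = entry
    if time.monotonic() - stored_at > CACHE_TIMEOUT:
        del cache[url]  # expired: fetch from the origin server again
        return None
    return response

store("http://example.com/", b"HTTP/1.1 200 OK\r\n\r\nhello")
print(lookup("http://example.com/"))  # expected: b'HTTP/1.1 200 OK\r\n\r\nhello'
```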
Browsers fire many requests simultaneously, so a proxy has to cope with that, and using threads or processes is the most effective way. Python threads do not allow true parallelism, because the GIL prevents two threads from executing bytecode simultaneously. (There would be no GIL with the IronPython or Jython interpreters, but I prefer to stay with CPython for this task.) It should be noted that threads might be sufficient even with the GIL, since most of the operations are I/O bound. For this project a multiprocess architecture was chosen to fully benefit from true parallelism. Given that the overhead of spawning new processes (done only once) and of interprocess communication (one socket piped per request) is really low in this project, this was considered the best option.
Internally the server implements a pre-forked model in the sense that child worker processes are created on server start. The difference from the classic (Apache) model is that the master process remains the only listener: it accepts connections, queues them, and sends each open socket to a child worker process. Although sockets are not picklable (you cannot successfully send them to a child process through a pipe), their file descriptors can be transferred over a Unix domain socket. After a child process is done with a socket, it is returned (if not closed by the peer) to the master process's queue.
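The descriptor-passing trick can be sketched like this. The sketch assumes Linux and Python 3.9+ for `socket.send_fds`/`recv_fds`; the project itself (Python 3.4) would use the equivalent helpers in `multiprocessing.reduction`. The socket pairs below stand in for the master-worker channel and an accepted client connection.

```python
import socket

# A unix-domain socket pair stands in for the master<->worker channel.
master_chan, worker_chan = socket.socketpair()

# A second pair stands in for a client connection the master accepted.
client_side, accepted = socket.socketpair()

# Master: ship the accepted socket's file descriptor to the worker.
socket.send_fds(master_chan, [b"fd"], [accepted.fileno()])

# Worker: receive the descriptor and rebuild a usable socket object.
msg, fds, _flags, _addr = socket.recv_fds(worker_chan, 16, 1)
worker_sock = socket.socket(fileno=fds[0])

# The worker can now serve the client over the transferred socket.
worker_sock.sendall(b"response from worker")
reply = client_side.recv(64)
print(reply)  # expected: b'response from worker'
```

In the real server the two endpoints live in different processes, but fd passing over a Unix domain socket works the same way across a fork boundary.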
Child processes are spawned once by the master process at server startup to minimize overhead. The master process then changes their states when needed and picks an idle process when data is available to be read on one of the sockets.
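Detecting which queued socket has data ready could look like the sketch below; the `queued` list and the use of `select.select` are assumptions for illustration, not the project's actual master loop (in the real server, the readable socket would then be piped to an idle worker).

```python
import select
import socket

# Three socket pairs stand in for three queued client connections.
pairs = [socket.socketpair() for _ in range(3)]
queued = [server for _client, server in pairs]

# Simulate one client sending a request on the second connection.
pairs[1][0].sendall(b"GET / HTTP/1.1\r\nHost: example\r\n\r\n")

# select() returns only the sockets that are readable right now;
# the master would hand this socket off to an idle worker process.
readable, _, _ = select.select(queued, [], [], 0.5)
print(readable == [queued[1]])  # expected: True
```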
This was a complex project that took time to debug. In the end it performs transparently for a single user (Core i5/Linux) while filling the cache, and it speeds up page loads when serving cached data. (In this test both server and client ran locally on the same PC.)