Docker: nginx + php-fpm in the same container?

I've been investigating this heavily, and like many people, I'm finding that in 2020 there doesn't seem to be much logic in separating a tightly coupled webserver + PHP + app process/code from each other:

The two main arguments for separation are scalability and separation of concerns (one process per container).

Logically, the webserver should only act as a webserver and distribute traffic to PHP nodes sitting behind a load-balanced service.

This way, one container/service acts as the webserver and serves static files, while proxying PHP requests to the PHP pods, which concern themselves only with PHP.
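To make the split concrete, this is roughly what the webserver side would carry - the `php-fpm` Service name, port 9000 and the document root below are just assumptions for the sketch, not anything the setup prescribes:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: web-vhost
data:
  default.conf: |
    server {
      listen 80;
      root /var/www/html;

      # Static files are answered directly by the webserver pods
      location / {
        try_files $uri $uri/ /index.php?$args;
      }

      # Dynamic requests are proxied over TCP to the php-fpm Service,
      # which load-balances across the PHP pods
      location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass php-fpm:9000;
      }
    }
```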

And the webserver and PHP pods can scale independently - ie you may need 50-100 PHP pods for a workload, but only 3-4 webserver pods. So you save on resources. Great!

Except in respect to separation of concerns...

If the webserver is not going to serve requests for different apps, there is little reason to separate the concerns into different containers - let alone services. If the app is coupled to the webserver, you just end up spawning extra webserver pods, with accompanying services and load balancers to go with them.

On top of that, you have to distribute the app code to both containers, or mount it from an NFS bind or some other mount, and deal with all the relevant access and permission issues. If you put the app code in both pods, you have to deal with separating static from dynamic files, or you may even end up distributing both types of files to both pods. You could run the two containers side by side in the same pod and have them share mounted directories; that works, but it puts both containers on the same logical host with regard to network, hostname etc, so you are still combining them. It does keep some separation of concerns. But the communication between the two containers in the same pod is itself a concern: it either goes over the intra-host network on loopback, which still uses the network stack and is less performant, or perhaps over a shared unix socket.
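For reference, that two-containers-in-one-pod variant looks roughly like this; the images, paths and shared emptyDir volumes are assumptions for the sketch, and php-fpm would additionally have to be told to listen on a socket under the shared /run/php directory (as in the socket example further down):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-and-php
spec:
  volumes:
    - name: app-code      # both containers need the application code
      emptyDir: {}
    - name: php-socket    # shared directory for the php-fpm unix socket
      emptyDir: {}
  containers:
    - name: nginx
      image: nginx:1.19
      volumeMounts:
        - name: app-code
          mountPath: /var/www/html
        - name: php-socket
          mountPath: /run/php
    - name: php-fpm
      image: php:7.4-fpm
      volumeMounts:
        - name: app-code
          mountPath: /var/www/html
        - name: php-socket
          mountPath: /run/php
```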

It's even worse if your app requires the webserver to be present in the same container - for example, if SSL is terminated upstream and you have to carry the visitor's actual IP over to the PHP pod, you may end up jumping through some hoops. When the webserver and PHP are both available in the same container, this can be handled through various means.
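One such means, for the single-container case, is nginx's realip module, assuming whatever terminates SSL in front (ingress-nginx, a load balancer) sets X-Forwarded-For; the trusted 10.0.0.0/8 range below is purely an assumption you would replace with your own ingress/LB addresses:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-realip
data:
  # Dropped into /etc/nginx/conf.d/ (http context)
  real-ip.conf: |
    # Trust X-Forwarded-For only when the request comes from the ingress/LB range
    set_real_ip_from 10.0.0.0/8;
    real_ip_header X-Forwarded-For;
    real_ip_recursive on;
    # fastcgi_params already passes $remote_addr as REMOTE_ADDR to php-fpm,
    # so PHP now sees the original visitor IP
```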

When everything is in one container, none of this is necessary. Everything is neatly packed in the same place, with no need to track separate deployments, services, copies of app code, or replication controllers.

This kind of single container/deployment is very advantageous today, especially if you are using ingress-nginx or nginx-ingress and hosting many different apps/sites in the same cluster. Each app/site lives in its own container (imagine different WordPress sites), completely self-contained in its permissions, access, deployment, version, requirements etc.
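With ingress-nginx in front, each self-contained site then only needs its own Deployment and Service, and one Ingress fans the hostnames out to them; the hostnames, service names and ports here are made up for illustration:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: sites
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
    - host: blog-a.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: blog-a        # single container: nginx + php-fpm + code
                port:
                  number: 80
    - host: shop-b.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: shop-b        # another fully self-contained site
                port:
                  number: 80
```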

So much so that you could even allow apps/sites to customize their runtime environments by providing configuration files for the webserver or PHP from NFS shares or other mounts - eg .htaccess, php.ini etc. This is a must for hosting different apps, since people tend to have different requirements.
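A sketch of that kind of customization, assuming a hypothetical NFS share and image name; the official PHP images pick up extra ini files from /usr/local/etc/php/conf.d/, so a per-site php.ini can simply be mounted there:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: blog-a
spec:
  volumes:
    - name: site-config
      nfs:
        server: nfs.example.internal        # assumed NFS server
        path: /exports/sites/blog-a/config  # per-site config share
  containers:
    - name: web
      image: my-wordpress-site:1.0          # nginx + php-fpm + app code in one image
      volumeMounts:
        - name: site-config
          mountPath: /usr/local/etc/php/conf.d/zz-site.ini  # per-site php.ini overrides
          subPath: php.ini
```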

In respect to performance...

Today, NGINX and Apache require very few resources thanks to event-based request handling. For example, in my benchmarks I saw Apache 2 with the event MPM using only 3-4 MB of memory and a grand total of ~2-3% CPU (out of 1000m requested CPU, ie 1 vCPU) while handling 50-100 concurrent requests per second serving WordPress.

The relative weight of the webserver vs PHP pods will of course differ depending on the app or website, and there are definitely apps that would require such a separation, but for most common web workloads it does not seem necessary.

So spawning a few Apache pods and then a hundred or more PHP pods does not seem to bring much gain. Instead it creates internal cluster network traffic, since the pods need to communicate with each other, and it adds complexity to the app configuration (deployments, services, load balancers etc).

Another disadvantage of separating the containers is that you create twice as many services per app. This counts against the limits of your cluster, since services and pods all require IP addresses.

When webserver + PHP + app are in the same container, the webserver can simply talk to php-fpm over a unix socket. That is faster and has less overhead than even the loopback interface inside the container, let alone network traffic inside the cluster. Internal network and CPU load are saved.
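Concretely, the two ends of that socket look something like the following; the paths are common defaults but treat them as assumptions for your own images, and in a single-container image these fragments would normally just be baked in rather than mounted from a ConfigMap:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: fpm-socket-config
data:
  # php-fpm pool: listen on a unix socket instead of 127.0.0.1:9000
  www.conf: |
    [www]
    listen = /run/php/php-fpm.sock
    listen.owner = www-data
    listen.group = www-data
  # nginx side (inside the server block): hand PHP requests to that socket, no TCP involved
  php.conf: |
    location ~ \.php$ {
      include fastcgi_params;
      fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
      fastcgi_pass unix:/run/php/php-fpm.sock;
    }
```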

In conclusion...

For specific purposes, with one Apache 2 or NGINX server distributing requests to different apps on different PHP-FPM clusters, separating concerns would likely be a necessity.

But in the era of ingress-nginx, this really doesn't seem necessary.

In particular, the reduced complexity, the portability, the performance and internal cluster noise saved, the better customization and the other benefits gained from packaging a tightly coupled server + PHP + app deployment in one single container seem too good to pass up.

Furthermore, this even enables shared-hosting-like setups in Kubernetes clusters, which seems to open up many possibilities.

I would very much appreciate input from anyone who has experience with this choice between a single container for a tightly coupled app vs multiple containers, or anyone who has benchmarked these setups.