> I'm surprised Apache Httpd is listed, I was always under the impression Nginx + php-fpm where the "ideal" pairing. (In terms of perf) @theruss have you seen any tests on this yet? I'll be leveraging both NGINX + Apache (mod_php, so mpm_prefork instead of PHP as an FPM). I know PHP performance in my own comparison tests is roughly the same, if not potentially faster, in Apache, but I wouldn't be surprised at all if NGINX were lightning fast with file serving (Apache seems a bit slow). In my case, I'm not worried about static assets being served slowly, as they get cached for 30 days via an intermediate CDN (e.g. Cloudflare, but Akamai in my case).
I'll use NGINX as a sort of complex rule-processing layer which will reverse proxy to Apache running mod_php. That way I can have application-specific rules (bundled with Apache in the same container) and then environment-specific rules and routing in NGINX, which will allow/disallow access and so on. This is particularly useful since Apache's access control is a complete clusterfuck if you want to do anything moderately advanced, like basic HTTP Auth, but allowing IPs from a whitelist, while STILL disallowing access to special back-end files (like templates) :facepalm-skype: Just layering in NGINX as that first outer layer (e.g. basic HTTP Auth + IP whitelist) solves the problem, since you aren't dealing with Apache's overly complicated rule-merging issues.
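Something like this for the NGINX outer layer, as a minimal sketch (the upstream address, whitelist range, and server name here are all hypothetical placeholders):

```nginx
# Outer layer: pass if EITHER the IP is whitelisted OR basic auth succeeds,
# then reverse proxy everything to Apache/mod_php.
server {
    listen 80;
    server_name example.com;              # hypothetical

    satisfy any;                          # allow OR auth_basic is enough
    allow 203.0.113.0/24;                 # example whitelist range
    deny  all;
    auth_basic "Restricted";
    auth_basic_user_file /etc/nginx/.htpasswd;

    # still block back-end files (e.g. SilverStripe .ss templates) for everyone
    location ~ \.ss$ { return 403; }

    location / {
        proxy_pass http://127.0.0.1:8080; # Apache (mod_php) behind NGINX
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

`satisfy any;` is what makes the "auth OR whitelist" part trivial here, versus the `Require`/`Satisfy` merging headaches in Apache.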
> Be careful using NFS or Network based storage for hosting your codebase on. The amount of IOPS required for application source code when running a website is quite high (E.g. each request reads many files, many times). So you can sometimes cause yourself bottlenecks in the IOPS by having your codebase mounted on a Network Storage and attached to multiple mount points (As IOPS is usually throttled / defined at the Network Storage rather than mount point, so it is divided among mounts).
Right, particularly if you're hosting code there. I personally wouldn't recommend that. However, enabling OPcache (which should be standard) will dramatically reduce disk I/O @brett.tasker. Either way, SilverStripe can still probably choke the NFS mount even if you're using in-memory caching via OPcache. In my case, I was only recommending NFS for accessing/serving assets, as a sort of placeholder in lieu of a more redundant and highly available "cloud" based solution like Flysystem (per blog: https://www.silverstripe.org/blog/utilising-amazon-s3-to-supercharge-your-silverstripe-hosting/)
Yep, I agree with everything 🙂 .
In regards to caching and NFS, you can sometimes run into issues where NFS attribute caching affects how quickly the guest sees file changes made on the host.
The `noac` mount option disables attribute caching on NFS mounts, meaning the guest will pick up changes much sooner than usual (at the cost of extra NFS traffic).
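For reference, a sketch of those mount options (server name and export paths are illustrative):

```shell
# Disable attribute caching entirely so the client sees server-side
# changes immediately (at the cost of extra GETATTR round trips):
mount -t nfs -o noac nfs-server:/export/assets /mnt/assets

# Softer option: keep the cache but cap its lifetime instead of disabling it
# (acregmin/acregmax are in seconds, for regular files):
mount -t nfs -o acregmin=1,acregmax=3 nfs-server:/export/assets /mnt/assets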
That's interesting. I think I'll be 100% avoiding any/all storage and execution of PHP over NFS. In my case, whenever NFS is involved, it's purely to mount an assets volume onto a k8s pod that has all PHP stored in its ephemeral container storage
And for sites that don't have very complex needs, there are still things like Knative (or in GCP, "Cloud Run"), but those tend to be stateless; also you would still need to host your uploaded assets in something like Cloud Storage (or Filestore, if you really wanna pay like 3-4x more per month than a basic Cloud Run instance could cost)
I think there's a ton of potential there. Heck, there's already a lot of potential realized (the momentum is real). This is primarily oriented toward sites that have enough load/traffic, or at least some expectation of high availability, but: for those types of websites, this is very much the future (i.e. k8s instead of more old-fashioned VMs, hard as it is to refer to those as "old fashioned")
and one major reason why is the inherent benefits of containerizing; i.e. OPcache can be indefinite (as long as you ship your code with your image). Also an important learning! For performance and availability reasons, I found it best not to use k8s just for infrastructure, but to treat your app as a container as well, instead of just containerizing the infra and still using SSH/SFTP/etc. to manually upload your files to an NFS server or something
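The "indefinite OPcache" idea boils down to a couple of php.ini directives, roughly like this (values are illustrative, not tuned):

```ini
; Code shipped inside the image never changes at runtime, so skip the
; per-request stat() checks entirely -- cached opcodes live forever.
opcache.enable=1
opcache.validate_timestamps=0
opcache.memory_consumption=128

; Note: with validate_timestamps=0, deploying new code means shipping a
; new image (or restarting FPM/Apache) -- which is exactly the container
; workflow anyway.
```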
cool thing is I found a good performance bump (even still on PHP 5.6 in old SS 3.x!) going from a fairly high performance 8GB virtual machine instance in RackSpace to a fairly mid-range VM cluster in GCP, shaving like 100ms or more from pages that take 300ms - 500ms to generate (after all caches are generated).
ok -- I found it to be pretty indispensable. I think even the inventor of k8s said that "all 100% declarative" (paraphrasing) maybe wasn't the best route, given all the tools that are needed to launch more complex infra. However, he's still a huge proponent of declarative configuration, which is certainly still the right place to be (for me, just as an important intermediate step to actually provisioning)
but yeah, if it's not too inconvenient, would still love to learn from your experience
In my case, I need to generate YAML from templates (forget Helm or Kustomize...) to deploy to like 5 separate clusters across 2 projects, each in a different namespace
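The template-to-YAML step doesn't need much tooling; here's a minimal sketch using only the stdlib (the manifest shape, app name, and project/namespace pairs are hypothetical placeholders, not my actual setup):

```python
from string import Template

# Skeleton manifest with $-placeholders -- illustrative fields only.
MANIFEST = Template("""\
apiVersion: apps/v1
kind: Deployment
metadata:
  name: $app
  namespace: $namespace
  labels:
    project: $project
""")

# One (project, namespace) pair per deployment target -- example values.
TARGETS = [
    ("project-a", "staging"),
    ("project-a", "prod"),
    ("project-b", "prod"),
]

def render_all(app, targets):
    """Render one YAML document per target, ready to pipe to kubectl apply."""
    return [
        MANIFEST.substitute(app=app, project=project, namespace=ns)
        for project, ns in targets
    ]

if __name__ == "__main__":
    for doc in render_all("my-site", TARGETS):
        print("---")
        print(doc)
```

Each rendered document can then be fed to `kubectl apply -f -` with the right `--context` per cluster.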