Message of the day:
A virtual pony is still a pony
All things virtual
FPM is FastCGI iirc. Plain CGI is antiquated tech: it spawns a new process for every request, and that overhead causes slowdowns. iirc.
FPM is a process manager to manage the FastCGI SAPI (Server API) in PHP.
Yeah, but CGI is a different thing from FastCGI :P
Correct 🙂 . But FPM != fastCGI 😛
I haven't had a chance to test CGI vs FPM so far, but FPM vs mod_php.... FPM is way better.
Apache + PHP-fpm / CGI
Well FPM is faster than PHP as a standard CGI...so say the FPM folks
I would recommend using either Apache + PHP-fpm / CGI OR nginx + fpm / cgi.
Gone are the days when 3 layers were needed; now all they do is add extra layers of complexity which are often not required.
Also, if you really needed Apache then as I think @brett.tasker mentioned last week, Apache + php-fpm is more performant than mod_php - again, I haven't had personal experience with that.
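For anyone curious what that Apache + php-fpm wiring actually looks like, here's a minimal vhost fragment using mod_proxy_fcgi (the socket path is illustrative; adjust for your FPM pool config):

```apache
# Hand .php requests off to a PHP-FPM pool via mod_proxy_fcgi
# (requires proxy and proxy_fcgi modules to be enabled).
<FilesMatch "\.php$">
    SetHandler "proxy:unix:/run/php/php-fpm.sock|fcgi://localhost"
</FilesMatch>
```

That replaces mod_php entirely, so Apache can run an event MPM instead of prefork.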
On first glance your setup looks potentially overly complicated. Like, if you really needed what Apache and nginx are currently providing you, why not use nginx throughout and have one less software package to deal with?
My opinions are also based on my own experiences 🙂
Hey Patrick - in a word "No" - it was only ever an impression.
> I'm surprised Apache Httpd is listed, I was always under the impression Nginx + php-fpm were the "ideal" pairing. (In terms of perf) @theruss have you seen any tests on this yet? I'll be leveraging both NGINX + Apache (mod_php, so mpm_prefork instead of PHP as an FPM). I know PHP performance in my own comparison tests is roughly the same if not potentially faster in Apache, but I wouldn't be surprised at all if NGINX were lightning fast with file serving (Apache seems a bit slow). In my case, I am not worried about static assets being served slowly as they get cached for 30 days via intermediate CDN (e.g. Cloudflare, but Akamai in my case).
I'll use NGINX as a sort of complex rule-processing layer which will reverse proxy to Apache, which is running
mod_php. That way I can have application-specific rules (bundled with Apache in the same container) and then environment-specific rules and routing which will allow/disallow access etc. in NGINX. This is particularly useful since Apache's access control is a complete clusterfuck if you want to do anything moderately advanced, like basic HTTP Auth but allow IPs from a whitelist, but then STILL disallow access to special back-end files (like templates) :facepalm-skype: Just layering in NGINX as that first outer layer (e.g. basic HTTP Auth + IP whitelist) solves that problem since you aren't dealing with those overly complicated rule-merging issues in Apache.
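A minimal sketch of that outer NGINX layer (server name, IP range, htpasswd path and the Apache upstream are all illustrative, and the template extension assumes SilverStripe-style .ss files). `satisfy any` is what gives you "IP whitelist OR basic auth", and the template deny applies regardless of either:

```nginx
server {
    listen 80;
    server_name example.com;

    # Allow a request if it matches the IP whitelist OR passes basic auth
    satisfy any;
    allow 203.0.113.0/24;
    deny  all;
    auth_basic           "Restricted";
    auth_basic_user_file /etc/nginx/.htpasswd;

    # STILL deny back-end template files, whoever you are
    location ~ \.ss$ { deny all; }

    location / {
        proxy_pass http://apache-backend:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```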
> Be careful using NFS or Network based storage for hosting your codebase on. The amount of IOPS required for application source code when running a website is quite high (E.g. each request reads many files, many times). So you can sometimes cause yourself bottlenecks in the IOPS by having your codebase mounted on a Network Storage and attached to multiple mount points (As IOPS is usually throttled / defined at the Network Storage rather than mount point, so it is divided among mounts).
Right, particularly if you're hosting code there. I personally wouldn't recommend that. However, enabling OPcache (which should be standard) will help alleviate disk I/O dramatically @brett.tasker. Either way, SilverStripe can still probably choke the NFS mount even if you're using in-memory caching via OPcache. In my case, I was only recommending NFS for accessing/serving assets as a sort of placeholder in lieu of a more redundant and highly available "cloud" based solution like Flysystem (per blog: https://www.silverstripe.org/blog/utilising-amazon-s3-to-supercharge-your-silverstripe-hosting/)
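For reference, this is roughly what enabling OPcache looks like in php.ini (values are common starting points, not prescriptions):

```ini
opcache.enable=1
opcache.memory_consumption=128
opcache.max_accelerated_files=10000
; On NFS it's the stat() calls that hurt, so revalidate less often:
opcache.revalidate_freq=60
; Or skip timestamp checks entirely IF you reset the cache on deploy:
; opcache.validate_timestamps=0
```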
Scaling your hosting on the fly has never been easier. Chief Architect at Moosylvania, Joe Madden, explains Flysystem and the SilverStripe S3 module, offering agencies more flexibility to host on new platforms and maintain high performance.
Yep, I agree with everything 🙂 .
In regards to caching and NFS, you can sometimes run into issues where NFS / OPcache caching affects how soon the NFS client (guest) sees changes made on the host.
The noac mount option disables attribute caching on NFS mounts, meaning the client sees changes much sooner than usual.
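Concretely, that's just a mount option (server and paths here are illustrative). The trade-off is more GETATTR traffic to the NFS server, i.e. exactly the extra IOPS warned about above:

```shell
# One-off mount with attribute caching disabled
mount -t nfs -o noac nfs.example.com:/export/assets /mnt/assets

# Or the equivalent /etc/fstab entry:
# nfs.example.com:/export/assets  /mnt/assets  nfs  noac  0 0
```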
That's interesting. I think I'll be 100% avoiding any/all storage and execution of PHP over NFS. In my case, whenever NFS is involved, it's purely to mount an assets volume onto a k8s pod that has all PHP stored in its ephemeral container storage.
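A sketch of that pod layout (image name, NFS server and paths are all hypothetical): the code lives in the container image's ephemeral filesystem, and only the assets directory is NFS-backed:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: silverstripe-web
spec:
  containers:
    - name: apache-php
      image: example/silverstripe:latest   # illustrative; PHP code baked into the image
      volumeMounts:
        - name: assets
          mountPath: /var/www/html/public/assets   # only assets on NFS
  volumes:
    - name: assets
      nfs:
        server: nfs.example.com   # illustrative
        path: /export/assets
```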