For those of you who are interested, I have just released a new update for my brettt89/silverstripe-web Docker image.

Changes
• Added PHP 7.4
• Added more PHP modules
• Added headers module
• Added "fpm" and "cli" builds

❤️ (1)
  $ docker run --rm --interactive --tty \
    --volume $PWD:/app \
    --user $(id -u):$(id -g) \
    composer install

But you can also provide user and group IDs to the container to ensure file permissions stay consistent, if needed.


> Filesystem permissions
> By default, Composer runs as root inside the container.


we install composer into our php container like so:

  # add Composer and disable root warning
  COPY --from=composer:1.10 /usr/bin/composer /usr/bin/composer
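For completeness: the "root warning" Composer prints can be silenced with the COMPOSER_ALLOW_SUPERUSER environment variable, so that comment presumably pairs the COPY with something like this (a sketch, not necessarily the exact Dockerfile):

```dockerfile
# add Composer and disable its "running as root" warning
ENV COMPOSER_ALLOW_SUPERUSER=1
COPY --from=composer:1.10 /usr/bin/composer /usr/bin/composer
```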

I used to do this as well. But once I started designing scalable systems, I started wanting to reduce the overhead.

E.g. why does my web server need to have Composer installed if I am only running it once?

Keeping services isolated allows for re-use of resources and makes scaling much easier (e.g. with Kubernetes).

^ My Opinion.


We don’t install it in the webserver container.

We have a separate PHP container and I think it’s fine there, it’s not a one-off thing for development.

Yes, I would agree with not having it in the container for production releases where the same containers are used to run the application within e.g. Kubernetes.


Yeah, that's fair enough. In my local environment I have aliased commands like this to Docker services.


  alias composer="docker run --rm --volume ${PWD}:/app composer"

So I don't have Composer or PHP installed in my development environment (which makes switching PHP versions on the fly easier).

I'm working on an environment at the moment that will let me define PHP versions using environment variables within the project (e.g. a .env file, maybe), so that my aliased commands like composer automatically select the correct PHP version.
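One way that could be sketched (hypothetical: the PHP_VERSION variable, the my-php image naming, and the wrapper function are all assumptions, not an existing setup):

```shell
# Read PHP_VERSION=x.y from the project's .env, falling back to a default.
php_version() {
  default="7.4"
  if [ -f .env ]; then
    v=$(sed -n 's/^PHP_VERSION=//p' .env | head -n1)
    if [ -n "$v" ]; then
      echo "$v"
      return
    fi
  fi
  echo "$default"
}

# Alias replacement: run Composer inside a container built for the project's
# PHP version (assumes you build/tag images like my-php:<version> yourself).
composer() {
  docker run --rm \
    --volume "${PWD}:/app" \
    --user "$(id -u):$(id -g)" \
    "my-php:$(php_version)" composer "$@"
}
```

A function rather than an alias so the image tag is resolved per invocation, in whatever project directory you happen to be in.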


we have proxy scripts that run these commands in the project containers, e.g. ./fe npm install runs npm install in the fe container, ./be composer install runs composer install in the be container, ./be sake dev/build, etc…

a generic proxy script is on the roadmap, along with a shell helper that would detect whether you are in project scope and run the command either locally or in the correct container, based on the current scope and what the command is. that would be the ultimate solution for us
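a generic version could be sketched roughly like this (hypothetical: the service names and docker-compose layout are assumptions): one script, symlinked as ./fe and ./be, that derives the target service from its own invoked name.

```shell
# Build the forwarding command from the invoked name: a script symlinked as
# ./fe or ./be targets the docker-compose service of the same name.
proxy_cmd() {
  service=$(basename "$1")
  shift
  echo docker-compose exec "$service" "$@"
}

# In the actual proxy script you would execute rather than echo:
#   eval "$(proxy_cmd "$0" "$@")"
```

so adding a new container only means adding a symlink, no new script.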


Is there any reason why you have these services in your FE and BE containers (compared to having them as separate container/image calls)?

I'm more just curious whether you have found any blockers or benefits from doing it this way that I haven't encountered or thought of myself.


we have a set of containers for each project, versioned in a repository that wraps the project repository, to keep them separated. each project has a docker-compose.yml and all the container definitions, so it's fully isolated.

reusability is done at the docker level through common layers, but we do need to update multiple places when things change, such as the PHP version.

the reason for that was having control of all dependencies for each project separately. we are considering an alternative where we’d have a library of configurations in a single or smaller number of repositories.

👍 (2)

or does the fact that it's a shared volume mean that's a non-issue?


(like, I get that ideally each thing would be its own container for the isolation factor)