This server was originally hosted on a Raspberry Pi 4 running a traditional LAMP stack (Linux, Apache, MySQL/MariaDB, PHP) with WordPress on bare metal. It wasn't very stable, so I decided to rebuild it. I'd read a bit about Docker, and several sources claimed apps run better (and faster) in containers, so I rebuilt the site using Docker containers.
I also read several blogs/posts/articles suggesting the Pi would run faster booted off USB. Hey, why not try that too? What follows is what I did, the issues I ran into, and what I ultimately found.
First I wanted to implement my setup with a USB boot (faster, right?), so I went with a generic 32GB USB 3.0 drive (advertised write speed: 15-30 MB/s; read speed: 50-80 MB/s). I put Raspbian Lite on an SD card, then cloned the image to the USB drive. I booted my Pi and updated it. Then I installed agnostics and ran a speed test with the following results:
Run 1
prepare-file;0;0;20397;39
seq-write;0;0;11663;22
rand-4k-write;0;0;2;0
rand-4k-read;2;0;0;0
Sequential write speed 11663 KB/sec (target 10000) – PASS
Random write speed 0 IOPS (target 500) – FAIL
Random read speed 0 IOPS (target 1500) – FAIL
Run 2
prepare-file;0;0;1186;2
seq-write;0;0;13432;26
rand-4k-write;0;0;1065;266
rand-4k-read;1277;319;0;0
Sequential write speed 13432 KB/sec (target 10000) – PASS
Random write speed 266 IOPS (target 500) – FAIL
Random read speed 319 IOPS (target 1500) – FAIL
Run 3
prepare-file;0;0;2011;3
seq-write;0;0;1331;2
rand-4k-write;0;0;830;207
rand-4k-read;3756;939;0;0
Sequential write speed 1331 KB/sec (target 10000) – FAIL
Note that sequential write speed declines over time as a card is used – your card may require reformatting
Random write speed 207 IOPS (target 500) – FAIL
Random read speed 939 IOPS (target 1500) – FAIL
Clearly, this drive failed miserably, particularly on random I/O.
I attributed that to it being a generic, cheap, bulk USB drive (albeit allegedly USB 3.0). So I decided to try a more upscale drive and cloned the original image from the SD card to a SanDisk Ultra USB 3.0 128GB drive (advertised "up to" 100 MB/s transfer speed). Once again I booted, updated, and installed agnostics. On the better (slightly better) drive I got this:
Run 1
prepare-file;0;0;37130;72
seq-write;0;0;33694;65
rand-4k-write;0;0;1962;490
rand-4k-read;4680;1170;0;0
Sequential write speed 33694 KB/sec (target 10000) – PASS
Random write speed 490 IOPS (target 500) – FAIL
Random read speed 1170 IOPS (target 1500) – FAIL
Run 2
prepare-file;0;0;41011;80
seq-write;0;0;35083;68
rand-4k-write;0;0;654;163
rand-4k-read;4611;1152;0;0
Sequential write speed 35083 KB/sec (target 10000) – PASS
Random write speed 163 IOPS (target 500) – FAIL
Random read speed 1152 IOPS (target 1500) – FAIL
Run 3
prepare-file;0;0;36756;71
seq-write;0;0;43343;84
rand-4k-write;0;0;2010;502
rand-4k-read;4621;1155;0;0
Sequential write speed 43343 KB/sec (target 10000) – PASS
Random write speed 502 IOPS (target 500) – PASS
Random read speed 1155 IOPS (target 1500) – FAIL
While this drive did much better on sequential writes, it still failed random reads/writes (except barely squeaking by on writes on the third pass). Time to benchmark the SD card itself. Same process: update, install agnostics, and run the test:
Run 1
prepare-file;0;0;20938;40
seq-write;0;0;14955;29
rand-4k-write;0;0;2731;682
rand-4k-read;7259;1814;0;0
Sequential write speed 14955 KB/sec (target 10000) – PASS
Random write speed 682 IOPS (target 500) – PASS
Random read speed 1814 IOPS (target 1500) – PASS
While the sequential write speed is significantly slower than the SanDisk's, everything else is much quicker on the SD card. Since my use case is more random than sequential, I opted to stick with the SD card for this project. I plan to test a Kingston 256GB SATA SSD in the near future, but I didn't have it on hand at the time.
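For what it's worth, the raw semicolon lines appear to follow the pattern test;read KB/s;read IOPS;write KB/s;write IOPS. That field order is my inference from matching the raw lines against the summary lines (e.g. 1814 IOPS × 4 KB ≈ 7256 KB/s, close to the 7259 KB/s read figure), so treat it as an assumption. A quick awk sketch to tabulate them:

```shell
#!/bin/sh
# Tabulate agnostics raw result lines.
# ASSUMPTION: field order inferred from the summary lines is
#   test-name ; read KB/s ; read IOPS ; write KB/s ; write IOPS
awk -F';' 'NF==5 {
    printf "%-16s read %6s KB/s %5s IOPS | write %6s KB/s %5s IOPS\n",
           $1, $2, $3, $4, $5
}' <<'EOF'
prepare-file;0;0;20938;40
seq-write;0;0;14955;29
rand-4k-write;0;0;2731;682
rand-4k-read;7259;1814;0;0
EOF
```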
On to the actual project build.
First, I installed Docker and docker-compose so I could set up the separate containers using a single .yml file.
# apt-get install docker.io docker-compose
Next, I set up the directory for the docker-compose project and set up a DocumentRoot folder for the Apache container. Within that folder I put a basic php placeholder script.
# mkdir -p linuxconfig/DocumentRoot
# echo "<?php phpinfo(); ?>" > linuxconfig/DocumentRoot/index.php
Instead of stepping through the incremental addition of containers to the docker-compose.yml file, I'll just post the entire file and address where I had some issues. The docker-compose.yml file is placed in my linuxconfig folder.
# cat docker-compose.yml
version: "3.1"

services:

  php-httpd:
    container_name: apache
    image: php:7.3-apache
    restart: always
    ports:
      - 8080:80
    volumes:
      - "./DocumentRoot:/var/www/html"

  db:
    container_name: mariadb
    image: mariadb:10.4.20
    restart: always
    volumes:
      - mariadb-volume:/var/lib/mysql
    environment:
      MYSQL_ALLOW_EMPTY_PASSWORD: "no"
      MYSQL_ROOT_PASSWORD: "fakerootpwd"
      MYSQL_USER: "testuser"
      MYSQL_PASSWORD: "faketestpwd"
      MYSQL_DATABASE: "testdb"

  phpmyadmin:
    image: phpmyadmin
    container_name: phpmyadmin
    restart: always
    depends_on:
      - db
    environment:
      - "PMA_ARBITRARY=1"
    ports:
      - 8081:80

  wordpress:
    image: wordpress
    container_name: wordpress
    restart: always
    ports:
      - 80:80
    environment:
      WORDPRESS_DB_HOST: db
      WORDPRESS_DB_USER: wordpress
      WORDPRESS_DB_PASSWORD: wordpresspwd
      WORDPRESS_DB_NAME: wordpress
    volumes:
      - wordpress:/var/www/html
      - ./uploads.ini:/usr/local/etc/php/conf.d/uploads.ini

volumes:
  mariadb-volume:
  wordpress:
Important points:
The volumes: tag in the php-httpd service (apache) is read as host:container. /var/www/html inside the container is backed by ~/linuxconfig/DocumentRoot on the base file system. With the ports: tag, port 8080 on the server is forwarded to port 80 in the container.
When running phpMyAdmin I got tripped up trying to connect to testdb. I needed to connect to the mariadb container, which provides access to all the databases.
Under the phpmyadmin service, port 8081 on the server points to port 80 in the container. Adding the PMA_ARBITRARY=1 environment variable allows you to specify an arbitrary database server on the login screen of the app.
WordPress gave me a little grief. Again in this instance, port 80 on the server points to port 80 inside the container. Each container's app listens on port 80, but the server listens on the separate host ports and forwards them to the containers. Note the volumes: tag: /var/www/html is inside the container, and the second entry there is an uploads.ini file. When restoring the backup from my bare-metal website to the containerized WordPress, I was thwarted by a max upload size. To fix that, I created an uploads.ini file for WordPress's PHP to load. The contents of the uploads.ini file:
file_uploads = On
memory_limit = 500M
upload_max_filesize = 500M
post_max_size = 500M
max_execution_time = 600
This file had to go in the /usr/local/etc/php/conf.d directory inside the WordPress container. To do that, we have to start the containers first.
# docker-compose up -d --build
Once your containers have started, open a shell inside the container using docker exec, giving it the container_name of your WordPress service from your docker-compose.yml file:
# docker exec -it wordpress /bin/bash
Then just create the uploads.ini file, paste the contents above into it, save, exit the shell, and restart your containers.
# docker-compose down
# docker-compose up -d
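Since the compose file already bind-mounts ./uploads.ini from the project folder into the container, an alternative to exec-ing into the running container is to create the file on the host, next to docker-compose.yml, before (re)starting the stack. A sketch:

```shell
#!/bin/sh
# Create uploads.ini next to docker-compose.yml; the bind mount
#   ./uploads.ini:/usr/local/etc/php/conf.d/uploads.ini
# in the wordpress service then places it inside the container.
cat > uploads.ini <<'EOF'
file_uploads = On
memory_limit = 500M
upload_max_filesize = 500M
post_max_size = 500M
max_execution_time = 600
EOF
```

Just make sure the file exists before running docker-compose up, since Docker will create a directory at a bind-mount path that doesn't exist yet.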
Hopefully, you can now import a prior WordPress backup and be up and running!
So far I'm not finding the performance of the containerized setup to be as good as the bare-metal stack, but I learned a lot getting it set up, and I certainly understand it might be my setup. I'll continue to tweak as I learn.
If you have any tips or want to point out something I’ve missed, let me know!
