Install Docker and deploy the first service

To install Docker, I followed this guide (archived page).

It took a while (5-10 minutes), and CPU usage peaked at roughly 50%.

Instead of manually creating a non-root user as explained in the article, I ran the command suggested at the end of the previous install: dockerd-rootless-setuptool.sh install (the first attempt failed and I had to install a missing package).
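If you hit the same failure, the missing package is most likely uidmap, which rootless Docker needs for newuidmap/newgidmap (that's an assumption on my side, so check your actual error message):

sudo apt-get install -y uidmap          # likely the missing package (assumption: verify against the error)
dockerd-rootless-setuptool.sh install   # retry the rootless setup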

At the end of it, I had to add export DOCKER_HOST=unix:///run/user/1000/docker.sock to my .bashrc.
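Something like this does the trick (1000 is my user's UID; check yours with id -u):

echo 'export DOCKER_HOST=unix:///run/user/1000/docker.sock' >> ~/.bashrc   # adjust the UID if yours differs
source ~/.bashrc                                                           # reload the config in the current shell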

To double-check that everything was correctly installed, I executed docker info and docker run hello-world.

If you see something like the output below, it means that everything is working:

Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
7050e35b49f5: Pull complete 
Digest: sha256:aa0cc8055b82dc2509bed2e19b275c8f463506616377219d9642221ab53cf9fe
Status: Downloaded newer image for hello-world:latest

Hello from Docker!
This message shows that your installation appears to be working correctly.

Now I also want to install docker-compose, so I ran these commands (from this article):

sudo apt-get install libffi-dev libssl-dev
sudo apt install python3-dev
sudo apt-get install -y python3 python3-pip

sudo pip3 install docker-compose
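To double-check this too, docker-compose --version should print the version that pip just installed:

docker-compose --version   # e.g. "docker-compose version 1.29.2" (the last release published on PyPI)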

And now everything is set up!

Before installing anything else: I knew that netdata monitors Docker containers out of the box, with no configuration needed. If netdata was started before Docker was installed, it just needs a restart: sudo service netdata restart.

Now I want to install my RSS reader, miniflux. It's an amazing tool that I currently host on my DigitalOcean droplet. I want to dump its DB and start a new instance on my Raspberry Pi with the current content.

I first tried to dump the DB using a tool with a UI, since I had some problems SSHing into my droplet. I tried pgadmin and pgweb, but both attempts failed.

I then used this strategy, successfully, to get a dump of the DB to my laptop:

  • start a shell in the postgres container: docker exec -it DOCKER_ID bash
  • run the dump command: pg_dump -U miniflux -d miniflux --column-inserts > 2023-01-25.sql
  • move the file into the folder that is attached to the volume: mv 2023-01-25.sql /var/lib/postgresql/data
  • back on the host (the droplet), cd into the folder where Docker stores volumes: cd /var/lib/docker/volumes
  • ls to see all the folders and cd into the one related to the right volume
  • since I could not ssh from my laptop (I was doing everything from the console in the browser) I could not copy the file over directly, so I had to use a third-party service. I simply uploaded the sql file to file.io: curl -F "file=@2023-01-25.sql" https://file.io
  • the previous curl returns a link that I could use from my laptop to download the dump. After the download the file is deleted from the server (I hope so :)
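In hindsight, the whole detour through the volumes folder and file.io is only needed because scp was not an option: docker exec can run pg_dump non-interactively and write the dump straight to the host, roughly like this (DOCKER_ID again stands in for the real container id):

docker exec DOCKER_ID pg_dump -U miniflux -d miniflux --column-inserts > 2023-01-25.sql   # dump lands directly on the host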

Now that I have a working dump on my laptop, I need to restore it on the Raspberry Pi:

  • copy the dump onto the Raspberry Pi: scp 2023-01-25.sql alcaprar@RASPBERRY_IP:/home/alcaprar
  • start a new postgres docker DB with a persistent volume
  • move the file into the folder used as the persistent volume
  • start a shell in the postgres container
  • create a new database: psql -U postgres, then create database miniflux; and exit with \q
  • restore the DB: psql -U postgres -d miniflux < 2023-01-25.sql
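Put together, the restore looks roughly like this (a sketch assuming the docker-compose below, so the volume folder is ./data/postgres, and DB_CONTAINER_ID stands in for the real container id):

scp 2023-01-25.sql alcaprar@RASPBERRY_IP:/home/alcaprar                  # from the laptop: copy the dump over
mv /home/alcaprar/2023-01-25.sql ./data/postgres/                        # on the Pi: this folder is mounted as the volume
docker exec -it DB_CONTAINER_ID bash                                     # shell into the postgres container
psql -U postgres -c 'create database miniflux;'                          # inside the container: create the empty DB
psql -U postgres -d miniflux < /var/lib/postgresql/data/2023-01-25.sql   # inside the container: replay the dump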

By now the DB should be restored, and it's only a matter of starting miniflux itself. This is the current docker-compose:

version: '3.4'
services:
  miniflux:
    image: miniflux/miniflux:2.0.36
    ports:
      - "8080:8080"
    depends_on:
      - db
    environment:
      - DATABASE_URL=postgres://postgres:postgres@db/miniflux?sslmode=disable
  db:
    image: postgres:15
    environment:
      - POSTGRES_PASSWORD=postgres
    volumes:
      - ./data/postgres:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD", "pg_isready", "-U", "postgres"]
      interval: 10s
      start_period: 30s

After a docker-compose up -d, miniflux should be accessible on port 8080 with all the previous content!
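A quick smoke test from the Raspberry itself; if I remember correctly, miniflux exposes a /healthcheck endpoint that simply returns OK:

curl http://localhost:8080/healthcheck   # prints OK when miniflux is up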

At this point I want to access it from the outside and use Cloudflare tunnels like I did for netdata.

To add a new service to Cloudflare tunnels:

  • stop cloudflared service: sudo service cloudflared stop
  • uninstall the service: sudo cloudflared service uninstall
  • remove the config from /etc/cloudflared/config.yml: sudo rm /etc/cloudflared/config.yml
  • add the new service to ~/.cloudflared/config.yml (see the sketch after this list)
  • add the new dns route: cloudflared tunnel route dns pi4 rss.caprar.xyz
  • reinstall the service: sudo cloudflared --config ~/.cloudflared/config.yml service install
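For reference, this is roughly what ~/.cloudflared/config.yml looks like with the new entry added; TUNNEL_ID and the netdata hostname are placeholders, not my actual values:

tunnel: pi4
credentials-file: /home/alcaprar/.cloudflared/TUNNEL_ID.json   # the real file is named after the tunnel UUID
ingress:
  - hostname: rss.caprar.xyz          # the new miniflux service
    service: http://localhost:8080
  - hostname: netdata.caprar.xyz      # placeholder hostname for the existing netdata service
    service: http://localhost:19999   # netdata's default port
  - service: http_status:404          # catch-all rule that cloudflared requires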