py-push-server
Docker Build & Deploy
export PUSH_SERVER_VERSION=0.1
# This command is also in build.sh
# Note that there is a user with the uid 1000 inside the Dockerfile: make sure that such a user exists on the server and has write access to the database.
docker build --tag py-push-server:amd-$PUSH_SERVER_VERSION --platform linux/amd64 .
docker save -o ~/dl/py-push-server-amd-$PUSH_SERVER_VERSION.tar py-push-server:amd-$PUSH_SERVER_VERSION
bzip2 ~/dl/py-push-server-amd-$PUSH_SERVER_VERSION.tar
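One way to copy the compressed image to the server (the user and host names here are placeholders):
scp ~/dl/py-push-server-amd-$PUSH_SERVER_VERSION.tar.bz2 user@your-server:~/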
... and then on the server after transferring that file:
export PUSH_SERVER_VERSION=0.1
bzip2 -d py-push-server-amd-$PUSH_SERVER_VERSION.tar.bz2
sudo docker load -i py-push-server-amd-$PUSH_SERVER_VERSION.tar
# start with an empty DB
cp data/webpush.db.empty py-push-server-db/webpush.db
# here's how to run it -- but if you're on production then look below about ADMIN_PASSWORD
sudo docker run -d -p 8900:3000 -v ~/py-push-server-db:/app/instance/data --name py-push-server-$PUSH_SERVER_VERSION py-push-server:amd-$PUSH_SERVER_VERSION
On a production server, for the security of permissioned endpoints (eg. /web-push/generate_vapid), set an ADMIN_PASSWORD environment variable. One way is to add this to the docker run command: -e ADMIN_PASSWORD=<anything secure>
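For example, the run command above becomes (the password value is a placeholder):
sudo docker run -d -p 8900:3000 -e ADMIN_PASSWORD='<anything secure>' -v ~/py-push-server-db:/app/instance/data --name py-push-server-$PUSH_SERVER_VERSION py-push-server:amd-$PUSH_SERVER_VERSION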
Finally, after it's started, generate a new VAPID key by POSTing to the regenerate-vapid
endpoint on the published host port, eg. curl -X POST localhost:8900/web-push/regenerate-vapid (8900 matches the -p 8900:3000 mapping above)
Docker Compose & HAProxy Setup
On a production server, for the security of permissioned endpoints (eg. /web-push/generate_vapid), set an ADMIN_PASSWORD environment variable. One way is to create a .env file with the value inside before running docker compose commands:
ADMIN_PASSWORD=<anything secure>
On first run you need to:
docker network create phoenix-network
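The project's docker-compose.yml entry for the push server itself isn't reproduced in this document. Based on the docker run command above, a minimal service entry might look like the sketch below; the service name, image tag, and published port are assumptions, and the container_name is chosen to match the server line in the HAProxy backend shown further down:
py-push-server:
  container_name: 'endorser-push-server'   # must match the "server ... endorser-push-server:3000" line in the HAProxy backend below
  image: py-push-server:amd-0.1
  environment:
    - ADMIN_PASSWORD=${ADMIN_PASSWORD}     # picked up from the .env file
  volumes:
    - ~/py-push-server-db:/app/instance/data
  ports:
    - '127.0.0.1:8900:3000'                # optional; only needed for local curl tests, since HAProxy reaches it over the network
  restart: always
  networks:
    - phoenix-network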
HAProxy setup
... in docker-compose.yml ...
default-backend:
  container_name: 'default-backend'
  image: nginx:1.22.0-alpine
  volumes:
    - /docker-volumes/haproxy-config/core/nginx/html:/usr/share/nginx/html
  restart: always
  networks:
    - phoenix-network

rsyslog:
  container_name: 'rsyslog'
  hostname: 'rsyslog'
  image: alpine-rsyslog
  build:
    context: ./alpine-rsyslog
  volumes:
    - $PWD/haproxy-config/core/haproxy.conf:/etc/rsyslog.d/haproxy.conf
    - $PWD/haproxy-config/log:/var/log
    - $PWD/haproxy-config/spool:/var/spool
    - $PWD/rsyslog/rsyslog.conf:/etc/rsyslog.conf
  ports:
    - '127.0.0.1:514:514'
  networks:
    - phoenix-network

haproxy:
  container_name: 'haproxy'
  hostname: 'haproxy'
  image: haproxytech/haproxy-alpine:latest
  ports:
    - 443:443
    - 80:80
  depends_on:
    - default-backend
    - rsyslog
  volumes:
    - $PWD/haproxy-config/log:/var/log
    - $PWD/haproxy-config/certs:/usr/local/etc/haproxy/certs:ro
    - $PWD/haproxy-config/core:/usr/local/etc/haproxy/core:ro
    - $PWD/haproxy-config/maps:/usr/local/etc/haproxy/maps:ro
    - $PWD/haproxy-config/sites:/usr/local/etc/haproxy/sites:ro
  command: "haproxy -f /usr/local/etc/haproxy/core/haproxy.cfg -f /usr/local/etc/haproxy/sites/"
  networks:
    - phoenix-network
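The alpine-rsyslog image is built from the local ./alpine-rsyslog context, which isn't included in this document. A minimal Dockerfile for it could be as simple as the following sketch:
FROM alpine:3.18
RUN apk add --no-cache rsyslog
# run in the foreground so the container stays up and Docker supervises the process
CMD ["rsyslogd", "-n"]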
... in haproxy-config/core/haproxy.conf ...
$UDPServerAddress 127.0.0.1
$UDPServerRun 514
local0.* /var/log/haproxy.log
& ~
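The rsyslog/rsyslog.conf mounted above is likewise not shown here. For the $UDPServer* directives in haproxy.conf to take effect, it needs to load the UDP input module and include the drop-in directory; a minimal sketch in the same legacy syntax:
$ModLoad imudp
$WorkDirectory /var/spool
$IncludeConfig /etc/rsyslog.d/*.conf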
... in haproxy-config/core/haproxy.cfg ...
global
tune.ssl.default-dh-param 2048
log rsyslog:514 local0
maxconn 4096
defaults
option httplog
option forwardfor except 127.0.0.1
option forwardfor header X-Real-IP
option http-no-delay
log global
mode http
retries 10
option redispatch
timeout connect 4000
timeout client 600000
timeout server 600000
timeout queue 10s
frontend default_frontend
mode http
bind *:80
bind *:443 ssl crt /usr/local/etc/haproxy/certs alpn h2,http/1.1
# redirect non-www
# http-request redirect prefix https://%[hdr(Host),regsub(^www\.,,i)] code 301 if { hdr_beg(host) -i www. }
# Make a rule that the server cannot be directly accessed by IP address
acl has_domain hdr(Host),map_str(/usr/local/etc/haproxy/maps/domains.map) -m found
http-request deny if !has_domain
# ACME challenge rule: don't force-redirect Let's Encrypt HTTP-01 requests to https
acl letsencrypt-acl path_beg /.well-known/acme-challenge/
redirect scheme https code 301 if !{ ssl_fc } !letsencrypt-acl
compression algo gzip
compression type text/css text/html text/javascript application/javascript text/plain text/xml application/json image/svg+xml
acl is_content_type_html res.hdr(Content-Type) -i text/html
http-response set-header Content-Type text/html;\ charset=UTF-8 if is_content_type_html
http-response set-header Cache-Control no-cache,\ max-age=31536000
http-response set-header Expires %[date(3600),http_date]
http-response set-header Strict-Transport-Security "max-age=31536000; includeSubDomains; preload"
http-response set-header X-XSS-Protection "1; mode=block"
http-response set-header X-Content-Type-Options "nosniff"
http-response set-header Referrer-Policy "strict-origin-when-cross-origin"
use_backend %[base,lower,regsub(^www\.,,i),map_beg(/usr/local/etc/haproxy/maps/sites.map,default_backend)]
listen stats
bind *:9999
mode http
log global
stats enable
stats realm Haproxy\ Statistics
stats uri /haproxy_stats
stats hide-version
backend haproxy_stats_backend
http-request auth realm haproxy-stats unless { http_auth_group(basic-auth-list) is-haproxy-stats }
mode http
compression algo gzip
compression offload
server server_nginx localhost:9999
userlist basic-auth-list
group is-guest
group is-haproxy-stats
user guest password $5$N7CpS0mo$FyJtlwQOwzAi5HnCISumyBKWyPu6DhBO7eGzUUyWoJ7 groups is-guest
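The hash above is SHA-256 crypt (the $5$ prefix). To add a user that can actually pass the is-haproxy-stats check, generate a hash and append a user line; the username below is only an example:
openssl passwd -5 'choose-a-strong-password'
# then add the result to the userlist, eg.
# user stats-admin password $5$<generated hash> groups is-haproxy-stats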
... in haproxy-config/sites/web-push.anomalistlabs.com.cfg ...
Define an HAProxy backend and map it to the Docker host and port. NOTE: this also turns off the CORS origin restriction by allowing any origin.
backend web_push_backend
mode http
compression algo gzip
compression offload
http-response set-header Access-Control-Allow-Origin "*"
server server_nginx endorser-push-server:3000
... in haproxy-config/maps/domains.map ...
Add a domain that will be used as a base:
timesafari.anomalistlabs.com
... in haproxy-config/maps/sites.map ...
Map the /web-push path to the web_push_backend. NOTE: the timesafari-pwa.anomalistlabs.com PWA sits on the root.
timesafari-pwa.anomalistlabs.com/web-push/ web_push_backend
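After a reload you can sanity-check the routing and the wide-open CORS header from outside; note that whatever host you test with must also appear in domains.map or the has_domain rule will deny it (the hostname here is just the one used in the map entry above):
curl -sI https://timesafari-pwa.anomalistlabs.com/web-push/vapid | grep -i access-control-allow-origin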
The rest ..
docker-compose up -d
should just work :-)
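To confirm, a couple of standard checks:
docker-compose ps
docker-compose logs --tail=50 haproxy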
Run the server outside Docker
Run the app:
sh <(curl https://pkgx.sh) +python.org +virtualenv.pypa.io sh
# first time
python -m venv .
source bin/activate
# first time
pip install -r requirements.txt
cp data/webpush.db.empty data/webpush.db
# For DB access, you'll have to uncomment the local path for `db_uri`.
# 3 workers would trigger 3 daily subscription runs
gunicorn -b 0.0.0.0:3000 --log-level=debug --workers=1 app:app
... and see the results in a browser: http://localhost:3000/web-push/vapid
See Troubleshooting below if that doesn't work out of the box.
Run a test:
python webpush.py
Run haproxy (on a Mac):
- Create a "haproxy-config" directory for those files above, eg. /usr/local/etc/haproxy
- Comment out the "log rsyslog" and "bind *:443" lines in /usr/local/etc/haproxy/haproxy.cfg and then run:
  haproxy -f /usr/local/etc/haproxy/haproxy.cfg
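If HAProxy isn't installed yet, Homebrew provides it (assuming a Homebrew setup):
brew install haproxy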
Troubleshooting
- If you get "no such table: vapid_key" then your file paths are probably wrong. Check that the "docker run" mounted volume matches the SQLALCHEMY_DATABASE_URI in the app.py file.
- If you get "unable to open database file", you can set SQLALCHEMY_DATABASE_URI in app.py to "sqlite:////..." with the full path to the data/webpush.db file. (Why doesn't the relative form "sqlite:///..." work with a relative path?)
- Another potential cause of "unable to open database file" is the permissions on the directory or file holding the DB, as set on the local volume that matches the Docker "/app/instance/data" directory. Note that the user id in the Dockerfile is set to 1000; the "id" command will tell you the uid for your user, which should match. The user with uid 1000 should then have write permission on both the file and the directory, and execute permission on the whole path to that directory; see the example below.
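For example, on the host directory backing the volume (the path follows the docker run example above; adjust if yours differs):
id                                         # shows your uid; the container user is uid 1000
ls -ld ~/py-push-server-db ~/py-push-server-db/webpush.db
sudo chown -R 1000 ~/py-push-server-db     # hand the volume to uid 1000 if needed
chmod -R u+rwX ~/py-push-server-db         # write on the file, execute (traverse) on the directory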