How to make web images better

I took part in a survey from the Chrome team about images.

The last question was "Anything else you’d like to tell us about images?". I started thinking and realized I have so many things to tell that at some point I decided to just create this post and leave a link there 🙂

First of all, I’m a pure back-end developer, so I don’t know much about the web, but even with my limited experience I’ve had a lot of confusion and thoughts. Maybe some of them are stupid, but many other newcomers may have similar opinions.

Almost nobody uses built-in browser compression

I prefer to use SVG vector images when possible, and often it’s simply more effective to use an SVG file and compress it with GZip or Brotli. But then we need to add the Content-Encoding header, which is not always possible to configure, especially if you’re just creating a bundle that will be deployed by someone else.
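A minimal sketch of that precompression workflow, assuming nginx on the serving side with the third-party ngx_brotli module (file names are illustrative):

gzip -k -9 image.svg        # produces image.svg.gz, -k keeps the original
brotli -k -q 11 image.svg   # produces image.svg.br

# nginx can then serve the precompressed files with the proper
# Content-Encoding header:
#   gzip_static on;     # picks up image.svg.gz automatically
#   brotli_static on;   # picks up image.svg.br (needs ngx_brotli)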

For example, in almost all legacy projects I worked on as a Java developer, we just created a *.war file, i.e. a zip archive with both assets and compiled code, which was then deployed to some really old Tomcat web server by sysadmins. So even if we pre-compress all assets with Brotli (which, by the way, is not easy to do with Maven because, as far as I remember, Brotli hadn’t yet been ported to Java), we then have to ask the sysadmins to enable serving precompressed files in Tomcat, and it turns out that’s not so easy to do.

This is especially a problem for the limited web servers that run on embedded devices like OpenWrt: they don’t support the Content-Encoding header but are very limited in space. Here, for example, is a discussion about how to make a favicon for OpenWrt’s LuCI admin panel: https://github.com/openwrt/luci/issues/2251

So it would be nice if browsers could support files with a double *.svg.br extension, decompress them, and show them as SVG even if the Content-Encoding header wasn’t set. Then we could just specify <img src="/image.svg.br"/> in our code like a regular image. As far as I remember, some browsers already support the *.svgz extension but not *.svg.gz, so we need to rename those files after gzipping. It would also be really nice to add support for Zstd compression, which sometimes beats Brotli on decompression, or even LZ4, which can be useful for lightweight on-the-fly compression.

If you want to ask the browser to download the file instead of showing it, you can just use the Content-Disposition header. This is such an easy and almost safe change that it’s worth considering.
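For example, in nginx that’s a one-line change (the location is illustrative):

location /downloads/ {
  # ask the browser to download instead of rendering inline
  add_header Content-Disposition 'attachment; filename="image.svg"';
}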

Basically, the same simple mechanism of auto-decompression by file extension could be applied to all assets: JS, CSS, and even the HTML itself. Even bitmap images could use it: roughly speaking, a PNG is just a deflate-compressed bitmap, so the whole PNG format could be simplified to *.bmp.gz files. The same goes for WOFF font files, which are essentially zlib-compressed TTF files. But what happens if you want to use Brotli instead of deflate inside a WOFF file? Does it support different compressors, or should we create a new file extension like woff3? (In fact, WOFF2 did exactly that by switching to Brotli.) I guess there is a bunch of other formats that are just compressed containers for some data, and there are potentially hundreds of compressors, especially for media files. Double extensions could be a simple solution.

Here comes another interesting problem: if you have a set of images, e.g. country flags, that share a lot of common patterns, a common compression dictionary can be built, so it is more effective to make a tar file first and only then compress it. This can reduce the resulting file size. But then how do you reference a file inside the tar archive from an <img> src attribute? In Java we do this easily by using the ! sign in a URL, like my.jar!/assets/icon.png. So why not use a similar approach in browsers?
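A quick shell experiment shows the effect (assuming a flags/ directory of SVG files; the exact numbers will vary):

# compress every flag separately
for f in flags/*.svg; do brotli -k -q 11 "$f"; done
du -ch flags/*.svg.br | tail -n 1

# tar first, then compress once: patterns shared between files
# are deduplicated by the compressor
tar -cf flags.tar flags/
brotli -k -q 11 flags.tar
du -h flags.tar.br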

Dark images

Sometimes I use an extension that renders all websites in a dark theme, but images are still very bright and icons look too colorful.

website rendered in night mode but the photo is still mostly white

As far as I know, Safari recently added a feature to detect the system’s dark mode (the prefers-color-scheme media query), but it would be great to have the same functionality as srcset for images: specify one image for the usual mode and a separate one for dark mode (or even high contrast for visually impaired people). Since dark mode detection is a CSS media query, here is another thing: it would be nice to change an image source just by CSS.
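Actually, the existing <picture> element combined with that media query can already express the light/dark switch; a minimal sketch, assuming separate light.png and dark.png files:

<picture>
  <!-- used when the system/browser is in dark mode -->
  <source srcset="dark.png" media="(prefers-color-scheme: dark)">
  <!-- default for light mode and for older browsers -->
  <img src="light.png" alt="screenshot">
</picture>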

For example, if you have themes for a website, you can simply load one CSS file or another, but for images you can’t just change their src; you have to use a <div> with a background image instead, and then you must specify the image dimensions for the div because it won’t be adjusted automatically when the image loads.

Speaking more generally, it is very unclear what counts as a "style" that can be changed via CSS and what is an attribute of the tag, so this is a very controversial point but a real pain for developers. For example, a lot of devs even use FontAwesome to declaratively use logo images just by a specific CSS class.

IMHO the correct solution would be some kind of JSON attribute files that are applied to the whole DOM tree, not only to the style attributes as CSS does.

Almost all images on a typical page are just icons. The same icons

And speaking of FontAwesome: for a lot of websites, especially internal pages or admin panels, we just need some basic set of icons: edit, save, delete, search, settings etc. In fact, the basic set is less than a hundred icons. Why not just add these icons to the browser and then allow webmasters to use them somehow? Yes, there is a big problem of how the icons should actually look: should they be colorful, or monochromatic like in FontAwesome, or should they be the same as in your desktop applications? For example, browsers already support System Colors, i.e. you can give some element the same color as a system button: https://developer.mozilla.org/en-US/docs/Web/CSS/color_value#System_Colors

This is not widely used (I remember only one or two real-life usages of the feature), but the main idea is that the browser can show some element more natively for your system. In practice it is rarely needed, because you have a design for your website anyway. It only makes sense when the user really wants a radically different color scheme, like dark mode or high contrast.

Personally, I don’t think any single browser can create an ideal set of icons, let alone convince other browsers to support it. But there could be a simple solution, the same as with fonts: all browsers already have the same basic set of fonts (Arial, Helvetica etc.), and if some font doesn’t exist on the system, the browser can download the font file from the URL specified by the webmaster.
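For comparison, that font fallback mechanism looks roughly like this in CSS (the font name and CDN URL are illustrative):

@font-face {
  font-family: "Lato";
  /* use the locally installed font if present, download it otherwise */
  src: local("Lato"), url("https://some-cdn.com/lato.woff2") format("woff2");
}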

A similar solution could work here: the browser already ships images for some basic icon set, and the same icons are accessible from a CDN. Developers can see the hashes of the images; for example, the sha384 of the edit icon is sha384-oqVuAfXRKap7fdgcCY5uykM6+R9GqQ8K/uxy9rx7HNQlGYl1kPzQho1wx4JwY8wC. Then a developer can add a regular image tag to the page with src pointing to the CDN and the SRI hash specified:

<img src="https://some-cdn.com/edit.png" integrity="sha384-oqVuAfXRKap7fdgcCY5uykM6+R9GqQ8K/uxy9rx7HNQlGYl1kPzQho1wx4JwY8wC"/>

The browser can quickly look up the image locally by the SRI hash and skip the call to the CDN entirely, resolving the image from the local computer. So it’s kind of a set of precached images, and the cache is shared between all websites. In fact, there are already a few browser extensions that imitate such a "Local CDN".

Actually, while writing this I realized that it would be enough to just create a good set of well-designed icons on a CDN and promote it to developers as something good to use by default, because those icons most probably are already cached in the user’s browser. The only missing piece is to allow sharing cached files between different domains, i.e. if I once visited site1.com which uses the jquery-2.1.min.js file, then when I visit site2.com which wants the same file with the same SRI hash, it will be resolved locally from site1’s cache.

It’s important to make the lookup by the hash, because the same jquery.js file can be delivered from different CDNs while in fact it is the same file.
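For reference, the SRI hash of such a file can be computed with openssl:

# sha384 digest, base64-encoded, as expected by the integrity attribute
openssl dgst -sha384 -binary jquery-2.1.min.js | openssl base64 -A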

There could be a security hole here: if a hacker finds a hash collision with jquery-2.1.min.js, they can create a fake JS file which will be used instead of jquery-2.1.min.js.

Another solution could be using Emoji as icons, for example Save 💾 Search 🔍 Edit 🖊️, but I couldn’t find a proper Emoji for a remove or add button. I don’t think the creators of Emoji aimed to replace common computer icons. Someone should check how rich Emoji really are; actually, a lot of developers have just never considered that this could be a good replacement for icons. At least in my experience.

We need some built-in browser image editor

At least the simplest one, with crop and resize. Almost all images uploaded by users could easily be shrunk and converted to better formats, but users just don’t care about image size. Why is WebP not used so much on the internet? Because when I uploaded the screenshot above to this post, I just didn’t want to spend time converting it from PNG to WebP or whatever this website supports. I don’t even know whether the website can minimize an image. But what if we could tell the browser which formats and image size the website prefers? For example:

<input name="avatar" type="file"
  accept="image/*"
  preferred-formats="image/webp;auto-convert;max-width=320;auto-zoom;"
  max-size="15kb"/>

Or something like that. I.e. we say: "Hey browser, here is a file upload for the user’s avatar; we want any image files but we prefer WebP, and if you can, just convert it from JPEG to WebP yourself (auto-convert); the image must be no wider than 320 pixels (max-width=320), but if the user doesn’t want to resize it, don’t decline it, just make the image smaller yourself (auto-zoom); by the way, the image file size is also limited to 15 KB."

What is important here is that after this lossy compression we can show the resulting image to the user, so they can see the resulting quality even before uploading.

With this simple directive we could significantly reduce network load at the source. Yes, websites can still run image converters like ImageMagick on the server side, but why load the network and the servers if we can do this on the browser’s side?
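Until something like this exists, the conversion has to happen out of band; a server-side sketch with the cwebp tool from libwebp (file names and quality are illustrative):

# convert an uploaded PNG to WebP, capped at 320px wide
# (-resize 320 0 preserves the aspect ratio)
cwebp -q 80 -resize 320 0 avatar.png -o avatar.webp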

We developers are too lazy to convert images

If you want to make image optimization more popular, just add a WebP converter to Chrome DevTools. That’s it. It would easily increase the popularity of the format severalfold.

QR codes

This is not really related, but also quite important to me. Recently I wanted to show a QR code with a bitcoin address on my website. The generated QR code image was very big and I wasn’t able to make it smaller (we can’t use lossy compression), so it was easier for me to add a JS library that generates a <canvas> with the QR code. That JS library is quite big anyway, but what is more interesting is: why aren’t QR codes supported by a JS API in browsers?

Almost all browsers already have built-in QR support, but it’s just not exposed via an API, so websites can’t use it. For example, imagine you have a "Share QR" button: by clicking it, the browser shows a big QR code that is easy to scan from a phone.

How many websites use QR codes? Well, I guess quite a lot.


Set up DDNS/DynDNS in OpenWrt

I serve my small homepage stokito.com directly from my router running OpenWrt Linux, so I don’t have to pay for any hosting because the router is always online anyway. My provider gives my router a public IP which changes sometimes, maybe about once per week. That is fine for me, but I have to change the DNS record manually. Of course, I could buy a static public IP from my internet provider, but my goal is to have the cheapest possible website. So I need to automatically and periodically update the DNS A record with my current IP.

To solve this problem people use Dynamic DNS (DDNS), which is a de facto pseudo-protocol where the router itself constantly registers its current IP on the DNS server. Most routers already support some DDNS providers, of which the most popular are DynDNS.com and NO-IP.com; manufacturers like ASUS may even have their own DDNS. Gamers and owners of IP cameras use this very often.

Unfortunately, my DNS registrar doesn’t support the DDNS protocol, so I have to use another one. The good news is that OpenWrt already has a package ddns-scripts which supports a lot of servers. I checked almost all DDNS providers it supports and chose DuckDNS.org.

DynDNS.com looks like one of the first DDNS providers, and some others even try to implement its API. But it’s paid, and that’s not acceptable for me because for the same money I could just buy a static IP. NO-IP.com has some strange API problems with refreshing the IP, so there is even a separate OpenWrt script ddns-scripts_no-ip_com. At the same time, DuckDNS looks like it was made by programmers for programmers: it lets you quickly register with a Google account, gives you a generated random token instead of a password, and has good documentation.

The API is so simple that I even wondered why the ddns-scripts package was created at all. In fact, all you need to do is register on DuckDNS and receive your token (i.e. password), then log in to your OpenWrt LuCI admin panel, open System / Scheduled Tasks, and add the following line:

0 */4 * * * wget -4 -q -O /dev/null http://www.duckdns.org/update/{YOURDOMAIN}/{YOURTOKEN}

i.e. every 4 hours you will send an HTTP GET request to DuckDNS.

Then you can check the logs of the cron task in the syslog: System / System Log. For example, for my domain stokito.duckdns.org:

Mon Apr 22 18:52:00 2019 cron.info crond[12903]: USER root pid 14005 cmd wget -4 -q -O /dev/null http://www.duckdns.org/update/stokito/6c5se9d3-5220-440-b46-6873f9a

But for some reason this setup via LuCI didn’t work for me, so it’s better to do the same from the command line. Log in and edit the crontab:

ssh root@192.168.1.1
root@OpenWrt:~# echo "42 */4 * * * /etc/update_ddns.sh" >> /etc/crontabs/root

or you can edit it interactively:

root@OpenWrt:~# crontab -e

The crontab -e command opens the vi editor on /etc/crontabs/root. Also note that I enabled the cron service, just to be sure. See the OpenWrt cron documentation for details.

Now put there a line like this:

42 */4 * * * /etc/update_ddns.sh

Note that I used a random minute (42) to protect DuckDNS from waves of requests when all users try to update their DNS at the same minute of the hour. So please pick some other minute too.

Then add this script:

#!/bin/sh
wget -4 -q -O /dev/null http://www.duckdns.org/update/{YOURDOMAIN}/{YOURTOKEN}

to /etc/update_ddns.sh and chmod +x it.

Now enable and restart the cron service:

root@OpenWrt:~# /etc/init.d/cron enable
root@OpenWrt:~# /etc/init.d/cron restart
root@OpenWrt:~# logread | grep cron

The last command is useful to see the cron logs. You may want to increase cronloglevel in /etc/config/system. If everything worked, your IP will be updated in the DuckDNS dashboard; see the "Last time changed" field.

Then your router will be accessible via the new domain. For example, for my domain:

$ dig stokito.duckdns.org

; <<>> DiG <<>> stokito.duckdns.org
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 41868
;; flags: qr rd ra; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;stokito.duckdns.org.		IN	A

;; ANSWER SECTION:
stokito.duckdns.org.	60	IN	A	77.122.151.58

;; Query time: 212 msec
;; SERVER: 176.103.130.131#53(176.103.130.131)
;; WHEN: Mon Apr 22 23:55:45 EEST 2019
;; MSG SIZE  rcvd: 129

Here you can see that the DNS server 176.103.130.131 (BTW, that’s AdGuard DNS) responded that the IP of the domain stokito.duckdns.org. is 77.122.151.58, i.e. my public IP.

Use a regular domain as an alias for DDNS

I already have the domain stokito.com and I would like to use it instead of the DDNS name stokito.duckdns.org. DNS supports this: I need to add a new CNAME record to my domain stokito.com pointing to the DDNS name stokito.duckdns.org. But the DNS spec allows this only for subdomains, i.e. I can map blog.stokito.com to stokito.duckdns.org, but I can’t do that for the root domain stokito.com. Not sure why, but most domain registrars follow this rule. I added a subdomain record router.stokito.com mapped via CNAME to stokito.duckdns.org, and here is how it resolves now:

$ dig router.stokito.com
; <<>> DiG 9.11.5-P1-1ubuntu2.3-Ubuntu <<>> router.stokito.com
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 19506
;; flags: qr rd ra; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;router.stokito.com.		IN	A

;; ANSWER SECTION:
router.stokito.com.	60	IN	CNAME	stokito.duckdns.org.
stokito.duckdns.org.	60	IN	A	77.122.151.58

;; Query time: 223 msec
;; SERVER: 176.103.130.131#53(176.103.130.131)
;; WHEN: Sun May 05 14:22:29 EEST 2019
;; MSG SIZE  rcvd: 133

You can see that router.stokito.com was first resolved to the CNAME stokito.duckdns.org., which was then resolved to my router’s IP 77.122.151.58. This has a downside: now your router’s IP is visible to anyone who would like to hack you.

Fortunately, I use CloudFlare, which works like a proxy that protects my site from DDoS. Its free plan allows almost everything I need. What is important is that I can transfer my domain to the CF nameservers, and CF allows mapping a CNAME to the root domain stokito.com (so-called CNAME flattening). So in the CF DNS settings I set the CNAME, and now when I open stokito.com my website is served from the router. In fact, they don’t create a real alias: the stokito.com domain refers to a CF IP address, and internally they proxy HTTP requests to stokito.duckdns.org.

CloudFlare DNS settings screenshot

So I configured these domains:

  1. router.stokito.com is a CNAME to stokito.duckdns.org; note that its cloud icon is gray, which means CF will not proxy this domain and it will work only as DNS. Thus router.stokito.com will always resolve to my router’s IP via the DDNS stokito.duckdns.org, as you already saw in the dig output.
  2. The wildcard * domain, i.e. any other subdomain, will also resolve to my router’s IP. In fact you don’t need this; I just wanted to show that you have this possibility.
  3. The root domain stokito.com and its www subdomain will be proxied (the orange cloud icon) to stokito.duckdns.org. The real IP of my router is hidden in this case, and it’s protected from DDoS by CF.

Now you can check that the root domain stokito.com resolves to the CF proxy:

$ dig stokito.com

; <<>> DiG 9.11.5-P1-1ubuntu2.3-Ubuntu <<>> stokito.com
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 35463
;; flags: qr rd ra; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;stokito.com.			IN	A

;; ANSWER SECTION:
stokito.com.		3600	IN	A	104.28.4.8
stokito.com.		3600	IN	A	104.28.5.8

;; Query time: 51 msec
;; SERVER: 176.103.130.131#53(176.103.130.131)
;; WHEN: Sun May 05 14:05:34 EEST 2019
;; MSG SIZE  rcvd: 94

The IP addresses 104.28.4.8 and 104.28.5.8 belong to CloudFlare.

Configure the uhttpd web server to work with the dynamic domain

In fact, you can just use the DDNS name in /etc/config/uhttpd instead of an IP address, i.e.:

config uhttpd homepage
  option realm homepage
  list listen_http 'stokito.duckdns.org:80'
  option home '/tmp/www/stokito.com'
  option rfc1918_filter '0'

Here I configured my homepage on port 80, but instead of my external IP address 77.122.189.58 I just used my DDNS name stokito.duckdns.org. The point is that while my domain is stokito.com, it refers to CloudFlare, so I can’t use it here and have to use the DDNS name.

When the eth1 (i.e. wan) network interface is restarted, it may receive a new IP, so we have to update our DDNS record. We can add a hook on interface up that sends the update, i.e. triggers the same command we put into cron. To do so, add a hook to /etc/hotplug.d/iface/96-update-ddns.sh:

#!/bin/sh
# update the DDNS record every time an interface comes up
case "$ACTION" in
  ifup)
    /etc/update_ddns.sh
    ;;
esac

I set its priority to 96 so that it runs after the 95-ddns script, in case you decide to use that one instead of a self-made cron script. Just to avoid conflicts.

To restart uhttpd after the external IP has changed, add another hotplug script:

#!/bin/sh
# restart uhttpd when an interface comes up so it rebinds to the new IP
case "$ACTION" in
  ifup)
    /etc/init.d/uhttpd enabled && sleep 30 && /etc/init.d/uhttpd restart
    ;;
esac

And put it into /etc/hotplug.d/iface/97-homepage.sh. The 30 second delay is there to be sure the DNS record has been updated.

Now let’s try:

# ifconfig eth1 down
# ifconfig eth1
eth1      Link encap:Ethernet  HWaddr 00:C5:F4:71:1B:9A  
          BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:2488344 errors:0 dropped:499 overruns:0 frame:0
          TX packets:818007 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:3068100023 (2.8 GiB)  TX bytes:84736706 (80.8 MiB)
          Interrupt:4 
# ifconfig eth1 up
# ifconfig eth1
eth1      Link encap:Ethernet  HWaddr 00:C5:F4:71:1B:9A  
          inet addr:77.122.151.58  Bcast:77.122.151.255  Mask:255.255.255.0
          inet6 addr: fe80::2c5:f4ff:fe71:1b9a/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:2487401 errors:0 dropped:499 overruns:0 frame:0
          TX packets:817808 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:3068008637 (2.8 GiB)  TX bytes:84672103 (80.7 MiB)
          Interrupt:4 

# ps | grep uhttpd
 3007 root      1296 S    /usr/sbin/uhttpd -f -h /www -r main -x /cgi-bin -p 192.168.1.1
 3008 root      1296 S    /usr/sbin/uhttpd -f -h /tmp/www/stokito.com -r homepage1 -p stokito.duckdns.org:80
 3018 root      1200 S    grep uhttpd

Explanation:

  1. Stop the eth1 interface. At this moment the internet goes down.
  2. Check the eth1 details to be sure there is no external IP.
  3. Start eth1 with ifconfig eth1 up and see from its details that an IP has been obtained.
  4. Check that the uhttpd process is restarted after the 30 second delay. To ensure it was restarted, you can change the site name or realm in /etc/config/uhttpd and then see that the name has changed after the restart. Here, for example, you may notice that I changed the homepage realm name to homepage1.

In fact, we don’t have to restart uhttpd if the IP hasn’t changed. Also, if we detect an IP change, we can start uhttpd with the new IP, for example by updating it with uci. It’s not so easy to get the IP from an interface name, but you can look at the getLocalIp function from the ddns-scripts’ dynamic_dns_functions.sh.
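A sketch of that idea using OpenWrt’s shell helpers (untested; the section name homepage comes from the uhttpd config above):

#!/bin/sh
# hypothetical hook body: rebind uhttpd to the current wan IP via uci
. /lib/functions/network.sh

network_get_ipaddr ip wan          # puts the wan IPv4 into $ip
uci delete uhttpd.homepage.listen_http
uci add_list uhttpd.homepage.listen_http="$ip:80"
uci commit uhttpd
/etc/init.d/uhttpd restart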

But the restart-it-anyway approach is much simpler, so I decided to keep it.

Protect the router from hackers: allow access to the HTTP server only from CloudFlare proxy IPs

Since my website is accessible only via CloudFlare, I need to allow the CF IPs and deny all others. I denied access to port 80 in the /etc/config/firewall file, but to allow the CF IPs you need to add this script to /etc/firewall.user:

for ip in $(wget -qO- http://www.cloudflare.com/ips-v4); do
  iptables -I INPUT -p tcp -m multiport --dports http,https -s "$ip" -j ACCEPT
done

The script fetches the list of CF IPs and allows them via iptables.

How do I set up Google Talk/Hangouts in Pidgin?

Google Talk still works over XMPP, so you can use Pidgin; there is an official tutorial: Configure Pidgin to connect to Google Talk.

But an important part is missing there: you’ll get a «Not Authorized» error during connection.
The problem is that Google tries to improve security, so ideally your Google/Gmail password should never be entered anywhere except on Google itself, so that no trojan or virus can steal it.
So how can you log in to Google Talk without entering your Google Account password?
Well, you can generate another password.
Go to My Account, and in the «Sign-in & Security» column go to "Signing in to Google" and then "App passwords".

You’ll see the «Select the app and device you want to generate the app password for.»
Select App: Mail
Select Device: Other
Then input «pidgin»
Press Generate button

Then you’ll see «Your app password for your device».

Copy the generated password, enter it into your Pidgin account settings, and that’s it.

Nginx Plus Docker Image + lua-resty-openidc for OAuth termination

I needed a Docker image with Nginx Plus and lua-resty-openidc configured to use the Keycloak OAuth provider.
I based it on the article Deploying NGINX and NGINX Plus with Docker, but there were a few additional non-trivial steps, so here is my result.
Create a folder with the Nginx Plus repo keys (nginx-repo.crt and nginx-repo.key).
Then create a Dockerfile with the following content:

FROM ubuntu:artful

# Download certificate and key from the customer portal (https://cs.nginx.com)
# and copy to the build context
COPY nginx-repo.crt /etc/ssl/nginx/
COPY nginx-repo.key /etc/ssl/nginx/

# Install NGINX Plus
RUN set -x \
  && apt-get update && apt-get upgrade -y \
  && apt-get install --no-install-recommends --no-install-suggests -y apt-transport-https ca-certificates \
  && apt-get install -y lsb-release wget \
  && wget http://nginx.org/keys/nginx_signing.key && apt-key add nginx_signing.key \
  && wget -q -O /etc/apt/apt.conf.d/90nginx https://cs.nginx.com/static/files/90nginx \
  && printf "deb https://plus-pkgs.nginx.com/ubuntu `lsb_release -cs` nginx-plus\n" | tee /etc/apt/sources.list.d/nginx-plus.list \
  && apt-get update && apt-get install -y nginx-plus nginx-plus-module-lua nginx-plus-module-ndk luarocks libssl1.0-dev git

RUN set -x \
  && apt-get remove --purge --auto-remove -y \
  && rm -rf /var/lib/apt/lists/*

# Forward request logs to Docker log collector
RUN ln -sf /dev/stdout /var/log/nginx/access.log \
  && ln -sf /dev/stderr /var/log/nginx/error.log

RUN luarocks install lua-resty-openidc
RUN luarocks install lua-cjson
RUN luarocks install lua-resty-string
RUN luarocks install lua-resty-http
RUN luarocks install lua-resty-session
RUN luarocks install lua-resty-jwt

EXPOSE 80

STOPSIGNAL SIGTERM

CMD ["nginx", "-g", "daemon off;"]

Here you can see that we install not only nginx-plus but also the nginx-plus-module-lua and nginx-plus-module-ndk modules, which are needed to run lua-resty-openidc.
Since lua-resty-openidc is distributed via the luarocks package manager, we need to install luarocks too and then install all the needed packages with it. For the lua-crypto dependency you need to install the libssl1.0-dev package with the OpenSSL headers, and for some other package we needed git; don’t ask me why, I have no idea.
FYI: openidc is installed into the file /usr/local/share/lua/5.1/resty/openidc.lua

Then build the image with:

docker build --no-cache -t nginxplus .

If you have a Docker registry inside your company, you can publish the image there:

docker tag nginxplus your.docker.registry:5000/nginxplus
docker push your.docker.registry:5000/nginxplus

Now you have an image and you can run it. All you need is to mount your server config into the /etc/nginx folder. Suppose you have a docker-compose.yml file with the following content:

version: '3'
services:
  gw-nginx:
    image: your.docker.registry:5000/nginxplus
    container_name: gw-nginx
    volumes:
      - ~/gateway/etc/nginx/nginx.conf:/etc/nginx/nginx.conf
      - ~/gateway/etc/nginx/conf.d/:/etc/nginx/conf.d/
      - /etc/localtime:/etc/localtime
    ports:
      - 80:80
  gw-keycloak:
    image: jboss/keycloak
    container_name: gw-keycloak
    volumes:
      - /etc/localtime:/etc/localtime
    ports:
      - 8080:8080
    environment:
      - KEYCLOAK_USER=root
      - KEYCLOAK_PASSWORD=changeMePlease
      - PROXY_ADDRESS_FORWARDING=true

Now create a gateway folder:

mkdir ~/gateway
cd ~/gateway

And place the nginx.conf file into ~/gateway/etc/nginx/nginx.conf. The most important part is to include these lines:

...
http {
  resolver yourdnsip;
  lua_ssl_trusted_certificate /etc/ssl/certs/ca-certificates.crt;
  lua_ssl_verify_depth 5;
  ...
}

For some reason, without these lines Lua scripts can’t resolve upstreams.

Create the ~/gateway/etc/nginx/conf.d/default.conf file and configure it as described in the lua-resty-openidc documentation.
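For orientation, a minimal default.conf sketch in the spirit of the lua-resty-openidc README (the realm, client_id, client_secret and upstream are placeholders):

server {
  listen 80;

  location / {
    access_by_lua_block {
      local opts = {
        redirect_uri = "/redirect_uri",
        discovery = "http://gw-keycloak:8080/auth/realms/myrealm/.well-known/openid-configuration",
        client_id = "myclient",
        client_secret = "mysecret",
      }
      -- redirects the user to Keycloak when there is no session yet
      local res, err = require("resty.openidc").authenticate(opts)
      if err then
        ngx.status = 500
        ngx.say(err)
        ngx.exit(ngx.HTTP_INTERNAL_SERVER_ERROR)
      end
    }
    proxy_pass http://your-upstream;
  }
}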
Finally, you can run everything with the docker-compose up -d command.

[Linkset] Authorization termination: OAuth reverse proxy

UPDATE: Today Nginx Plus was released with a new nginx-openid-connect module.

If you are looking for an authentication server or an OAuth library, then the OpenID Connect implementations page is a good place to start. I chose Keycloak, but I also want to look at FreeIPA or https://ipsilon-project.org

There is Keycloak Security Proxy, but I want the proxy as an Nginx module, and I need to implement something non-standard.
Also there is an OpenID/Keycloak proxy service: https://github.com/gambol99/keycloak-proxy

For the Apache web server everything is clear:
https://github.com/zmartzone/mod_auth_openidc

But I need something for Nginx.

lua-resty-openidc (see its intro blog post) is written in Lua, so it’s easier to extend (which I did to implement a custom grant flow + sessions + reference tokens).

https://github.com/tarachandverma/nginx-openidc is written fully in C++, which is interesting because you don’t need to enable Lua in Nginx (believe me, that can be harmful).

Also interesting is a module with only one purpose: using reference tokens (opaque tokens):
https://github.com/curityio/nginx_phantom_token_module It’s written in C, so no need for additional deps.

Authentication Based on Subrequest Result

Actually, Nginx already has something that can satisfy 80% of your needs:

https://docs.nginx.com/nginx/admin-guide/security-controls/configuring-subrequest-authentication/

https://www.nginx.com/blog/nginx-plus-authenticate-users/
http://nginx.org/en/docs/http/ngx_http_auth_request_module.html

https://github.com/nginxinc/nginx-ldap-auth
https://redbyte.eu/en/blog/using-the-nginx-auth-request-module/
https://news.ycombinator.com/item?id=7641148

But to use the subrequest auth, your auth server should support it, or you need to set up another shim proxy:
https://github.com/bitly/oauth2_proxy
and here is a Docker container which integrates it: https://github.com/sinnerschrader/oauth-proxy-nginx

Or you can use this one, which is written in Lua:
https://github.com/jirutka/ngx-oauth

Bonus:
An important article from security researcher Egor Homakov, who hacked GitHub and Facebook several times:

OAuth1, OAuth2, OAuth…?

[Linkset] Payment System, Payment Gateway and Credit Card Icons

https://github.com/muffinresearch/payment-icons SVG logos in flat, mono, outline and usual variants. Has the Verve credit card logo.

https://github.com/gilbarbara/logos the biggest collection of logos of popular brands, not only payment systems. SVG. See their demo site https://svgporn.com/

https://github.com/simple-icons/simple-icons collection which has icons for some cryptocurrencies. SVG

https://github.com/orlandotm/payment-webfont payment system logos as a font. WOFF format.

https://github.com/hiqdev/payment-icons a list of payment logos in different dimensions, but PNG only.

https://github.com/diogomachado/payments-svg SVG and has CSS sprite

https://github.com/gregoiresgt/payment-icons SVG and PNG with many variants.

https://github.com/goodybag/credit-card-logos
Flexible SVG credit card logo assets and CSS

https://github.com/codefoundries/material-ui-credit-card-icons Library with credit card icons for Material-UI

https://github.com/slaterjohn/payment-logos payment gateway and credit card logo icons. Available in 4 sizes. Has a Bitcoin logo and a Contactless Payment logo.

How to expose a local service to the Internet