Category: How To

Set up DDNS/DynDNS in OpenWrt

I serve my small homepage directly from my router running OpenWrt Linux, so I don't have to pay for hosting: the router is always online anyway. My provider assigns the router a public IP that changes from time to time, roughly once a week. That would be fine, except that I have to update the DNS record manually. Of course I could buy a static public IP from my internet provider, but my goal is the cheapest possible website. So I need to update the DNS A record with my current IP automatically and periodically.

To solve this problem people use Dynamic DNS (DDNS), a de facto pseudo-protocol in which the router itself periodically registers its current IP with the DNS server. Most routers already support a few DDNS providers out of the box, and manufacturers like ASUS may even run their own DDNS service. Gamers and owners of IP cameras use this a lot.

Unfortunately my DNS registrar doesn't support the DDNS protocol, so I had to use a third-party service. The good news is that OpenWrt already has a package, ddns-scripts, which supports a lot of providers. I looked through almost all the DDNS providers it supports. One of them looks like one of the first DDNS providers ever, and some others even try to implement its API, but it's paid, and that's not acceptable for me: for the same money I could just buy a static IP. Another, No-IP, has some strange API problems with refreshing IPs, so there is even a separate OpenWrt script for it, ddns-scripts_no-ip_com. DuckDNS, on the other hand, looks like it was made by programmers for programmers: you can quickly register with a Google account, you get a generated random token instead of a password, and the documentation is good.

In fact, the API is so simple that I even wondered why the ddns-scripts package was created at all. All you need to do is register on DuckDNS and receive your token (i.e. a password), then log in to your OpenWrt LuCI admin panel, open System / Scheduled Tasks and add the following line:

0 */4 * * * wget -4 -q -O /dev/null https://www.duckdns.org/update/{YOURDOMAIN}/{YOURTOKEN}

i.e. every 4 hours the router sends an HTTP GET request to DuckDNS.

Then you can check the cron task logs in the syslog: System / System Log. For example, for my domain:

Mon Apr 22 18:52:00 2019 crond[12903]: USER root pid 14005 cmd wget -4 -q -O /dev/null

But for some reason this setup via LuCI didn't work for me, so it's better to do the same from the command line. Log in and edit the crontab:

ssh root@
root@OpenWrt:~# echo "42 */4 * * * /etc/duckdns.sh" >> /etc/crontabs/root

or you can edit:

root@OpenWrt:~# crontab -e

The crontab -e command opens the vi editor on /etc/crontabs/root. Also note that I enabled the cron service, just to be sure. See the OpenWrt cron documentation for details.

Now put there a line like this:

42 */4 * * * /etc/duckdns.sh

Note that I used an arbitrary minute (42) to protect DuckDNS from waves of requests that would happen if all users updated their DNS exactly on the hour. Please pick some other minute too.
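If you manage several routers, a small sketch like this can derive a stable per-device minute instead of picking one by hand (the /etc/duckdns.sh path is just my hypothetical name for the update script):

```shell
# Sketch: derive a stable pseudo-random minute (0-59) from the hostname,
# so each router picks a different minute without manual coordination.
MINUTE=$(( $(uname -n | cksum | cut -d ' ' -f 1) % 60 ))
# /etc/duckdns.sh is a hypothetical name for the DDNS update script
echo "$MINUTE */4 * * * /etc/duckdns.sh"
```

The same hostname always hashes to the same minute, so the crontab line stays stable across regenerations.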

Then create this script:

#!/bin/sh
wget -4 -q -O /dev/null "https://www.duckdns.org/update/{YOURDOMAIN}/{YOURTOKEN}"

save it to /etc/ (I named mine /etc/duckdns.sh) and chmod +x it.

Now you need to enable and restart cron service:

root@OpenWrt:~# /etc/init.d/cron enable
root@OpenWrt:~# /etc/init.d/cron restart
root@OpenWrt:~# logread | grep cron

The last command shows the cron logs. You may want to increase cronloglevel in /etc/config/system. If everything worked, the IP in the DuckDNS dashboard will be updated; see the "Last time changed" field.

Then your router will be accessible via the new domain. For example, for my domain:

$ dig

; DiG
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 41868
;; flags: qr rd ra; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 1

; EDNS: version: 0, flags:; udp: 4096
;		IN	A


;; Query time: 212 msec
;; WHEN: Mon Apr 22 23:55:45 EEST 2019
;; MSG SIZE  rcvd: 129

Here you can see that the DNS server (BTW, that's AdGuard) responded with my router's public IP for the domain.

Use a regular domain as an alias for DDNS

I already have a regular domain, and I would like to use it instead of the DDNS one. DNS supports this: all I need to do is add a CNAME record to my domain pointing at the DDNS domain. But the DNS spec allows this only for subdomains, i.e. I can map a subdomain to the DDNS domain, but I can't do that for the root (apex) domain. Not sure why, but most domain registrars follow the rule. I added a subdomain record, mapped it via CNAME to the DDNS domain, and here is how it resolves now:

$ dig
; DiG 9.11.5-P1-1ubuntu2.3-Ubuntu
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 19506
;; flags: qr rd ra; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 1

; EDNS: version: 0, flags:; udp: 4096
;		IN	A


;; Query time: 223 msec
;; WHEN: Sun May 05 14:22:29 EEST 2019
;; MSG SIZE  rcvd: 133

You can see that the name was first resolved to a CNAME, which was then resolved to my router's IP. This has a downside: now your router's IP is visible to anyone who would like to hack you.

Fortunately I use CloudFlare, which works like a proxy that protects my site from DDoS. Its free plan allows almost everything I need. What is important is that I can move my domain to the CF nameservers, and CF allows mapping a CNAME to the root domain. So in the CF DNS settings I set the CNAME, and now when I open my domain it serves my website from the router. In fact, they don't create a real alias: the domain points to a CF IP address, and internally they proxy HTTP requests to my DDNS hostname.

CloudFlare DNS settings screenshot

So I configured these domains:

  1. The DDNS subdomain is a CNAME to the DuckDNS hostname; note that its cloud icon is gray, which means CF will not proxy this name and acts as DNS only. Thus it always resolves to my router's IP via DDNS, as you already saw in the dig output above.
  2. A wildcard * record, i.e. any other subdomain, also resolves to my router's IP. You don't actually need this; I just wanted to show that you have the possibility.
  3. The root domain and its www subdomain are proxied (orange cloud icon). The real IP of my router is hidden in this case, and it's protected from DDoS by CF.

Now you can check that root domain is resolved to CF proxy:

$ dig

; DiG 9.11.5-P1-1ubuntu2.3-Ubuntu
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 35463
;; flags: qr rd ra; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 1

; EDNS: version: 0, flags:; udp: 4096
;			IN	A

;; ANSWER SECTION:
		3600	IN	A
		3600	IN	A

;; Query time: 51 msec
;; WHEN: Sun May 05 14:05:34 EEST 2019
;; MSG SIZE  rcvd: 94

The returned IP addresses belong to CloudFlare.

Configure the uhttpd webserver to work with the dynamic domain

In fact, you can simply use the DDNS hostname instead of an IP address directly in /etc/config/uhttpd, i.e.:

config uhttpd homepage
  option realm homepage
  list listen_http ''
  option home '/tmp/www/'
  option rfc1918_filter '0'

Here I configured my homepage on port 80, but instead of my external IP address I just used my DDNS hostname. Note that my regular domain points to CloudFlare, so I can't use it here; I have to use the DDNS one.

When the eth1 (i.e. wan) network interface is restarted, it may receive a new IP, so we have to update our DDNS record. We can add a hook that fires on interface up and triggers the same command we put into cron. To do so, add a hook script to /etc/hotplug.d/iface/ (the numeric prefix sets its priority; I'd name it something like 97-duckdns):

case "$ACTION" in
  ifup) /etc/duckdns.sh ;;
esac

I set its priority to 97 so it runs after the 95-ddns script (if you decided to use ddns-scripts instead of a self-made cron script), just to avoid conflicts.

To restart uhttpd after the external IP changes, you can add another hotplug script:

case "$ACTION" in
  ifup) /etc/init.d/uhttpd enabled && sleep 30 && /etc/init.d/uhttpd restart ;;
esac

and put it into /etc/hotplug.d/iface/ with a higher priority (e.g. 98-uhttpd-restart). We set a 30 second delay to be sure the DNS record has been updated.

Now let’s try:

# ifconfig eth1 down
# ifconfig eth1
eth1      Link encap:Ethernet  HWaddr 00:C5:F4:71:1B:9A  
          BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:2488344 errors:0 dropped:499 overruns:0 frame:0
          TX packets:818007 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:3068100023 (2.8 GiB)  TX bytes:84736706 (80.8 MiB)
# ifconfig eth1 up
# ifconfig eth1
eth1      Link encap:Ethernet  HWaddr 00:C5:F4:71:1B:9A  
          inet addr:  Bcast:  Mask:
          inet6 addr: fe80::2c5:f4ff:fe71:1b9a/64 Scope:Link
          RX packets:2487401 errors:0 dropped:499 overruns:0 frame:0
          TX packets:817808 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:3068008637 (2.8 GiB)  TX bytes:84672103 (80.7 MiB)

# ps | grep uhttpd
 3007 root      1296 S    /usr/sbin/uhttpd -f -h /www -r main -x /cgi-bin -p
 3008 root      1296 S    /usr/sbin/uhttpd -f -h /tmp/www/ -r homepage1 -p
 3018 root      1200 S    grep uhttpd


  1. Stop the eth1 interface. At this moment the internet goes down.
  2. Check the eth1 details to make sure there is no external IP.
  3. Start eth1 with ifconfig eth1 up and check its details: the IP is obtained again.
  4. Check that the uhttpd process was restarted after the delay. To make sure it was restarted, you can change the site name or realm in /etc/config/uhttpd and then see that the name changed after the restart. Here, for example, you may notice that I changed the homepage realm name to homepage1.

In fact, we don't have to restart uhttpd if the IP hasn't changed. Also, if we detect an IP change, we can start uhttpd bound to the new IP, for example by updating it with uci. It's not that easy to get the IP from an interface name, but you can look at the getLocalIp function from the ddns scripts.

Still, this solution is much simpler, so I decided to keep it.
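As a sketch of that idea (untested; the interface name eth1, the hostname example.duckdns.org and the /etc/duckdns.sh path are my placeholder assumptions), the hotplug script could compare the freshly obtained address with what DNS still returns and only act on a mismatch:

```shell
#!/bin/sh
# Sketch: update DDNS and restart uhttpd only when the WAN IP no longer matches DNS.
# Parse the current address out of ifconfig output (BusyBox format).
WAN_IP=$(ifconfig eth1 2>/dev/null | sed -n 's/.*inet addr:\([0-9.]*\).*/\1/p')
# Resolve the DDNS name; example.duckdns.org is a placeholder.
DNS_IP=$(nslookup example.duckdns.org 2>/dev/null | awk '/^Address/ { a = $2 } END { print a }')
if [ -n "$WAN_IP" ] && [ "$WAN_IP" != "$DNS_IP" ]; then
  /etc/duckdns.sh              # push the new IP to DuckDNS
  sleep 30                     # give DNS a moment to pick it up
  /etc/init.d/uhttpd restart
fi
```

If the addresses already match, the script does nothing, so it is cheap to run from the hotplug hook.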

Protect router from hackers: allow access to HTTP server only to CloudFlare proxy IPs

Since my website should be accessible only through CloudFlare, I need to allow the CF IPs and deny everything else. I denied access to port 80 in the /etc/config/firewall file, and to allow the CF IPs you need to add this script to /etc/firewall.user:

for ip in `wget -qO- https://www.cloudflare.com/ips-v4`; do
  iptables -I INPUT -p tcp -m multiport --dports http,https -s $ip -j ACCEPT
done

The script fetches the list of CF IPs and allows them via iptables. UPD: for some reason this sometimes doesn't work (probably when the connection is lost) and I have to log in and execute the file manually. So in the end I just opened port 80 for wan.
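A variant I'd suggest (a sketch, not what I actually run): keeping the CloudFlare rules in their own iptables chain makes the refresh idempotent, so running the script repeatedly doesn't stack duplicate rules:

```shell
#!/bin/sh
# Create the chain on the first run, flush it on later runs.
iptables -N cloudflare 2>/dev/null || iptables -F cloudflare
# Re-populate the chain from the published CloudFlare IPv4 list.
for ip in $(wget -qO- https://www.cloudflare.com/ips-v4); do
  iptables -A cloudflare -p tcp -m multiport --dports http,https -s "$ip" -j ACCEPT
done
# Hook the chain into INPUT exactly once (-C checks whether the rule already exists).
iptables -C INPUT -j cloudflare 2>/dev/null || iptables -I INPUT -j cloudflare
```

This needs root and network access on the router; on failure the old chain contents simply remain in place.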

How to set up Google Talk/Hangout in Pidgin?

Google Talk still works over XMPP, so you can use Pidgin; there is an official tutorial: Configure Pidgin to connect to Google Talk.

But an important part is missing: you'll get a «Not Authorized» error during connect.
The problem is that Google tries to improve security: ideally your Google/Gmail password should never be entered anywhere except on Google itself, so no trojan or virus can steal it.
So how can you log in to Google Talk without entering your Google Account password?
Well, you can generate another password.
Go to My Account, and in the «Sign-in & Security» column go to Signing in to Google, then App passwords.

You'll see «Select the app and device you want to generate the app password for.»
Select App: Mail
Select Device: Other
Then type «pidgin»
Press the Generate button

Then you'll see «Your app password for your device».

Copy the generated password, enter it in your Pidgin account settings, and that's it.

Nginx Plus Docker Image + lua-resty-openidc for OAuth termination

I need a Docker image with Nginx Plus and lua-resty-openidc configured to use the Keycloak OAuth provider.
I based it on the article Deploying NGINX and NGINX Plus with Docker, but there were a few additional non-trivial steps, so here is my result.
Create a folder with your Nginx Plus repo keys (nginx-repo.crt and nginx-repo.key).
Then create a Dockerfile with the following content:

FROM ubuntu:artful

# Download certificate and key from the customer portal (
# and copy to the build context
COPY nginx-repo.crt /etc/ssl/nginx/
COPY nginx-repo.key /etc/ssl/nginx/

# Install NGINX Plus
RUN set -x \
  && apt-get update && apt-get upgrade -y \
  && apt-get install --no-install-recommends --no-install-suggests -y apt-transport-https ca-certificates \
  && apt-get install -y lsb-release wget \
  && wget https://nginx.org/keys/nginx_signing.key && apt-key add nginx_signing.key \
  && wget -q -O /etc/apt/apt.conf.d/90nginx https://cs.nginx.com/static/files/90nginx \
  && printf "deb https://plus-pkgs.nginx.com/ubuntu `lsb_release -cs` nginx-plus\n" | tee /etc/apt/sources.list.d/nginx-plus.list \
  && apt-get update && apt-get install -y nginx-plus nginx-plus-module-lua nginx-plus-module-ndk luarocks libssl1.0-dev git

RUN set -x \
  && apt-get remove --purge --auto-remove -y \
  && rm -rf /var/lib/apt/lists/*

# Forward request logs to Docker log collector
RUN ln -sf /dev/stdout /var/log/nginx/access.log \
  && ln -sf /dev/stderr /var/log/nginx/error.log

RUN luarocks install lua-resty-openidc
RUN luarocks install lua-cjson
RUN luarocks install lua-resty-string
RUN luarocks install lua-resty-http
RUN luarocks install lua-resty-session
RUN luarocks install lua-resty-jwt



CMD ["nginx", "-g", "daemon off;"]

Here you can see that we install not only nginx-plus but also the nginx-plus-module-lua and nginx-plus-module-ndk modules, which are needed to run lua-resty-openidc.
Since lua-resty-openidc is distributed via the luarocks package manager, we install luarocks too and then pull in all the needed packages with it. For the lua-crypto dependency you need the libssl1.0-dev package with the OpenSSL headers, and some other package needed git; don't ask me why, I have no idea.
FYI: openidc is installed into /usr/local/share/lua/5.1/resty/openidc.lua

Then you need to build an image with

docker build --no-cache -t nginxplus .

If you have a Docker Registry inside your company you can publish the image there:

docker tag nginxplus your.docker.registry:5000/nginxplus
docker push your.docker.registry:5000/nginxplus

Now you have an image and you can run it. All you need is to mount your server config into the /etc/nginx folder. Suppose you have a docker-compose.yml file with the following content:

version: '3'
services:
  nginx:
    image: your.docker.registry:5000/nginxplus
    container_name: gw-nginx
    volumes:
      - ~/gateway/etc/nginx/nginx.conf:/etc/nginx/nginx.conf
      - ~/gateway/etc/nginx/conf.d/:/etc/nginx/conf.d/
      - /etc/localtime:/etc/localtime
    ports:
      - 80:80
  keycloak:
    image: jboss/keycloak
    container_name: gw-keycloak
    volumes:
      - /etc/localtime:/etc/localtime
    ports:
      - 8080:8080
    environment:
      - KEYCLOAK_USER=root
      - KEYCLOAK_PASSWORD=changeMePlease

Now create a gateway folder:

mkdir ~/gateway
cd ~/gateway

and place the nginx.conf file into ~/gateway/etc/nginx/nginx.conf. The most important part is these lines:

http {
  resolver yourdnsip;
  lua_ssl_trusted_certificate /etc/ssl/certs/ca-certificates.crt;
  lua_ssl_verify_depth 5;

For some reason, without these lines Lua scripts can't resolve upstreams.

Create the ~/gateway/etc/nginx/conf.d/default.conf file and configure it as described in the lua-resty-openidc documentation.
Finally you can run it with docker-compose up -d command.


HOWTO: Create Your Own Self-Signed Certificate with Subject Alternative Names Using OpenSSL in Ubuntu Bash for Windows


My main development workstation is a Windows 10 machine, so we’ll approach this from that viewpoint.

Recently, Google Chrome started giving me a warning when I open a site that uses https and self-signed certificate on my local development machine due to some SSL certificate issues like the one below:

Self-Signed SSL Issue in Chrome

or the one described in this forum post, which is the one I originally got.

I previously made my self-signed certificate using the MAKECERT utility. Apparently, this tool does not support creating self-signed SSL certificates with a Subject Alternative Name (SAN). If anyone knows differently, please let me know.

So, after doing some searches, it seems that OpenSSL is the best solution for this.

If you are trying to use OpenSSL on Windows like me, you will probably be scratching your head on where to start. Build from the repository? Ouch. That's what they call yak shaving. I just want to quickly create my own damn self-signed certificate, not build a factory that can do that. Sure, there is a binary installation available, but after getting it installed and trying to figure out how to make it run nicely with PowerShell, I gave up.

Luckily, Windows 10 now has the ability to run Ubuntu Bash and after playing around with it, this seems to be the best way forward when using openssl.

Setup Ubuntu on Windows 10

To set it up, follow the instruction here.

Install OpenSSL

To install openssl run the following command from the bash shell:

sudo apt-get install openssl

Once installed, you are ready to create your own self-signed certificate.

Creating Self-Signed Certificate

I am using this OpenSSL Ubuntu article as the base, but there are some modifications along the way, so I’ll just explain the way I did it here. If you need further information, please visit that article.

The original article is using SHA1 but we really need to move to something else that is stronger like SHA256. If you are using SHA1 as suggested, you will be getting the Your connection is not private page in Chrome.

Creating Your Working Environment

We will use your user profile root directory (~/ which points to /home/jchandra in my case) to do this work. If you use anything else, you might need to customize the caconfig.cnf and localhost.cnf content below.

To create your environment, run the following in bash:

cd ~/ && mkdir myCA && mkdir -p myCA/signedcerts && mkdir myCA/private && cd myCA

This will create the following directories under your user profile root folder:

Directory Contents
~/myCA contains CA certificate, certificates database, generated certificates, keys, and requests
~/myCA/signedcerts contains copies of each signed certificate
~/myCA/private contains the private key

Create the Certificate Database

To create the database, enter the following in bash:

echo '01' > serial && touch index.txt

Create Certificate Authority Configuration File

Create caconfig.cnf using vim or nano or whatever Linux text-editor of your choice.

To create it using vim, do the following:

    vim ~/myCA/caconfig.cnf

To create it using nano do the following:

    nano ~/myCA/caconfig.cnf

The content should be like so:

# My sample caconfig.cnf file.
# Default configuration to use when one is not provided on the command line.
[ ca ]
default_ca = local_ca
# Default location of directories and files needed to generate certificates.
[ local_ca ]
dir = /home/jchandra/myCA
certificate = $dir/cacert.pem
database = $dir/index.txt
new_certs_dir = $dir/signedcerts
private_key = $dir/private/cakey.pem
serial = $dir/serial
# Default expiration and encryption policies for certificates
default_crl_days = 365
default_days = 1825
# sha1 is no longer recommended, we will be using sha256
default_md = sha256
policy = local_ca_policy
x509_extensions = local_ca_extensions
# Copy extensions specified in the certificate request
copy_extensions = copy
# Default policy to use when generating server certificates. 
# The following fields must be defined in the server certificate.
# It is the correct content.
[ local_ca_policy ]
commonName = supplied
stateOrProvinceName = supplied
countryName = supplied
emailAddress = supplied
organizationName = supplied
organizationalUnitName = supplied
# x509 extensions to use when generating server certificates
[ local_ca_extensions ]
basicConstraints = CA:false
# The default root certificate generation policy
[ req ]
default_bits = 2048
default_keyfile = /home/jchandra/myCA/private/cakey.pem
# sha1 is no longer recommended, we will be using sha256
default_md = sha256
prompt = no
distinguished_name = root_ca_distinguished_name
x509_extensions = root_ca_extensions
# Root Certificate Authority distinguished name
[ root_ca_distinguished_name ]
commonName = InvoiceSmashDev Root Certificate Authority
stateOrProvinceName = NSW
countryName = AU
emailAddress =
organizationName = Coupa InvoiceSmash
organizationalUnitName = Development
[ root_ca_extensions ]
basicConstraints = CA:true

Caveats for caconfig.cnf:

  1. In the [ local_ca ] section, make sure the dir setting uses your own Ubuntu username (the one you created when you set up Ubuntu on Windows 10); mine, for example, is dir = /home/jchandra/myCA. NOTE: DO NOT USE ~/myCA. It does not work.
    Similarly, change the default_keyfile setting in the [ req ] section to match.
  2. Leave the [ local_ca_policy ] section alone. commonName = supplied, etc. are correct and not to be overwritten.
  3. In [ root_ca_distinguished_name ] section, replace all values to your own settings.

Creating Your Test Certificate Authority

  1. Run the following command so openssl picks up the settings automatically:
export OPENSSL_CONF=~/myCA/caconfig.cnf
  2. Generate the Certificate Authority (CA) certificate:
openssl req -x509 -newkey rsa:2048 -out cacert.pem -outform PEM -days 1825
  3. Enter and retype the password you wish to use to import/export the certificate.
    NOTE: Remember this password, you will need it throughout this walk-through.

Once you are done you should have the following files:

File Content
~/myCA/cacert.pem CA public certificate
~/myCA/private/cakey.pem CA private key

In Windows, we will be using .crt file instead, so create one using the following command:

openssl x509 -in cacert.pem -out cacert.crt

Creating Your Self-Signed Certificate with Subject Alternative Name (SAN)

Now that you have your CA, you can create the actual self-signed SSL certificate.

But first, we need to create the configuration file for it. So again, use vim or nano, etc. to create the file. In this example, I will call mine localhost.cnf since that’s the server that I am going to be using to test my development code. You can call it whatever you want. Just make sure you use the right filename in the export command later on.

Below is the content of ~/myCA/localhost.cnf:

# localhost.cnf

[ req ]
prompt = no
distinguished_name = server_distinguished_name
req_extensions = v3_req

[ server_distinguished_name ]
commonName = localhost
stateOrProvinceName = NSW
countryName = AU
emailAddress =
organizationName = Coupa InvoiceSmash
organizationalUnitName = Development

[ v3_req ]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @alt_names

[ alt_names ]
DNS.0 = localhost
DNS.1 = invoicesmash.local

Caveats for localhost.cnf

  1. Change the values in [ server_distinguished_name ] section to match your own settings.
  2. In the [ alt_names ] section, change the values for DNS.0 and DNS.1 to whatever you need. In my case I test my web application at https://localhost:44300, therefore the correct value for me is DNS.0 = localhost. I am not sure what to do with DNS.1, so I just changed it to DNS.1 = invoicesmash.local. If I happen to have an entry for that name in my hosts file, it should still work.

Once you created the configuration file, you need to export it:

export OPENSSL_CONF=~/myCA/localhost.cnf

Now generate the certificate and key:

openssl req -newkey rsa:2048 -keyout tempkey.pem -keyform PEM -out tempreq.pem -outform PEM

Again, provide the password that you previously entered and wait for the command to complete.

Next, run the following to create the unencrypted key file:

openssl rsa < tempkey.pem > server_key.pem

Again, provide the password that you previously entered and wait for the command to be completed.

Now switch back the export to caconfig.cnf so we can sign the new certificate request with the CA:

export OPENSSL_CONF=~/myCA/caconfig.cnf

And sign it:

openssl ca -in tempreq.pem -out server_crt.pem

Again, provide the password that you previously entered and wait for the command to be completed and just type in Y whenever it asks you for [y/n].

Now you should have your self-signed certificate and the key.

File Content
~/myCA/server_crt.pem Self signed SSL certificate
~/myCA/server_key.pem Self signed SSL certificate private key
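Before converting anything, it's worth checking that the SAN extension actually survived signing (that's what copy_extensions = copy in caconfig.cnf is for); a quick way, assuming the file names from the table above:

```shell
# Print the Subject Alternative Name section of the signed certificate.
# Expect to see the DNS entries from [ alt_names ] here.
openssl x509 -in server_crt.pem -noout -text | grep -A1 "Subject Alternative Name"
```

If grep prints nothing, the request extensions were not copied and browsers will still reject the certificate.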

Converting the .pem files to .pfx for usage by Windows

In Windows, we mostly use .pfx and .crt files. Therefore, we need to convert the .pem file to .pfx. We’ll use cat to combine server_key.pem and server_crt.pem into a file called hold.pem. Then we will do the conversion using openssl pkcs12 command as shown below. You can use whatever text you want to describe your new .pfx file in the -name parameter.

cat server_key.pem server_crt.pem > hold.pem
openssl pkcs12 -export -out localhost.pfx -in hold.pem -name "InvoiceSmash Dev Self-Signed SSL Certificate"

Again, provide the password that you previously entered and wait for the command to be completed.

Now you should have the following files that we will use in the next section.

File Content
~/myCA/localhost.pfx Self signed SSL certificate in PKCS#12 format
~/myCA/cacert.crt CA certificate used to signed the self-signed certificate

Copy the PFX and CA Certificate to a Windows location and Install the CA & PFX into Windows

Copying PFX and CA from Ubuntu bash to Windows Side

It seems you cannot touch the Linux subsystem from the Windows side, but you can touch the Windows side from the Linux side, so that's what we are going to do.

To copy the files from inside Ubuntu, you need to know where you want to copy the files to on Windows side. For example, if I want to copy the files to C:\certificates folder, I’d do something like cp {localhost.pfx,cacert.crt} /mnt/c/certificates.

See this faq if you want to know more about this.

Install the new CA and self-signed certificates

To install the CA and self-signed certificates, all you need to do is double-click each file in the folder you copied them to.

Once clicked, just follow the Install Certificate steps and you should be good.

For the CA certificate (cacert.crt), make sure you install it to Local Machine, Trusted Root Certification Authorities.

For the self-signed certificate (localhost.pfx), install it to Local Machine, enter the password as previously, and store it in Personal.

That’s it. Now you can configure your application to use the new certificate. In my situation, I just need to configure the Azure Cloud Service project to use that certificate as pointed by this document. I do not know your workflow, so it might be different.




How to: LetsEncrypt HTTPS on OpenWRT with uhttpd

My old router, a TP-Link WRN740N, hosts my homepage, and it's too small to run the full LetsEncrypt certbot installer and OpenSSL. So if you want to enable HTTPS, you have to run certbot on some other machine and then upload the result to the router.
Here I would like to show how I did that.

Manual installation

The first step is manual cert installation from my laptop, renewing them every 3 months. Actually, this can be automated later too.

Now, lets generate certs for your domain:

$ sudo certbot certonly --manual --preferred-challenges http

Answer all the questions and it will ask you to upload a file to your router:

Create a file containing just this data:


And make it available on your web server at this URL:

Press Enter to Continue

You need to create the folder on router:

# mkdir -p ./.well-known/acme-challenge

Then upload the files via SCP from your computer to router:

$ echo "1Fyw2Q3IARaG0G6RVUJS587HG_Ou6pKpBLZC-_KeC4g.OKKBaAC2SgfXHQyvgKrLkn3zyCNH82xHgKsMg9OQQJE" > 1Fyw2Q3IARaG0G6RVUJS587HG_Ou6pKpBLZC-_KeC4g

$ scp ./1Fyw2Q3IARaG0G6RVUJS587HG_Ou6pKpBLZC-_KeC4g root@

BTW, there is no direct analogue of WinSCP for Linux, but you can try running WinSCP under Wine.
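The mkdir-and-scp dance above can also be done in one go over ssh (a sketch; the router IP, the /www document root and the token strings are placeholders, not my real values):

```shell
# Create the challenge directory and file on the router in one command.
# 192.168.1.1 is the default OpenWrt LAN address; adjust to your router.
ssh root@192.168.1.1 "mkdir -p /www/.well-known/acme-challenge && \
  echo '<token>.<key-authorization>' > /www/.well-known/acme-challenge/<token>"
```

Replace the angle-bracket placeholders with the exact file name and content certbot shows you.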

Then go back to certbot and press Enter. It will check that files are in place and accessible from web.

 - Congratulations! Your certificate and chain have been saved at:
   Your key file has been saved at:
   Your cert will expire on 2018-01-13. To obtain a new or tweaked
   version of this certificate in the future, simply run certbot

It generated two files: privkey.pem and fullchain.pem (note that the latter is the full certificate chain, not a public key!).

Finally, you need to convert the private key and the certificate from the ASCII-armored PEM format to the more economical binary DER format used by uhttpd:

openssl rsa -in privkey.pem -outform DER -out uhttpd.key
openssl x509 -in fullchain.pem -outform DER -out uhttpd.crt
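Optionally (my own addition, not part of the original flow), you can sanity-check the DER files before uploading them:

```shell
# Confirm the DER certificate parses and show its validity window.
openssl x509 -inform DER -in uhttpd.crt -noout -subject -dates
# Confirm the DER private key is intact.
openssl rsa -inform DER -in uhttpd.key -noout -check
```

The -dates output also tells you exactly when the renewal will be due.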

Upload them to the router

$ scp uhttpd.crt root@
$ scp uhttpd.key root@

On the router you need to install uhttpd-mod-tls:

# opkg update
# opkg install uhttpd-mod-tls

Edit /etc/config/uhttpd as described in the docs, i.e. like this:

config uhttpd ''
    list listen_http ''
    list listen_https ''
    option redirect_https '1'
    option home '/www'
    option rfc1918_filter '0'
    option cert '/etc/uhttpd.crt'
    option key '/etc/uhttpd.key'

Here the listen_https address is my public static IP.
Note that 443 port should be opened in /etc/config/firewall:

config rule
    option target 'ACCEPT'
    option src 'wan'
    option proto 'tcp'
    option dest_port '443'
    option name 'HTTPS'

Then restart the firewall and uhttpd server:

# /etc/init.d/firewall restart
# /etc/init.d/uhttpd restart

Now try your site in a browser. But check it again later: I noticed that uhttpd was down, although after a restart it worked well.

Renewing cert

… So 3 months passed, my cert expired and I need to renew it. It's funny that today is the Old New Year, and through the window I can hear a concert on my street.
This time I decided not to use manual mode and to use standalone mode instead: certbot starts its own HTTPS server on port 443, so I need to shut down the webserver on my router and forward port 443 from the router to my laptop.

So let’s do that:
1. Connect to the router and stop the uhttpd service:

$ ssh root@
# /etc/init.d/uhttpd stop
  2. Enable port 443 forwarding. Download the firewall config from the router:
$ scp root@ ./
  3. Edit it and comment out the current rule for port 443. If you used the one I mentioned before, then:
# temporarily comment out the rule
#config rule
#  option target 'ACCEPT'
#  option src 'wan'
#  option proto 'tcp'
#  option dest_port '443'
#  option name 'HTTPS'

Then add a HTTPS forwarding rule:

config 'redirect'
    option 'name' 'HTTPS_to_laptop'
    option 'src' 'wan'
    option 'proto' 'tcp'
    option 'src_dport' '443'
    option 'dest_ip' ''
    option 'dest_port' '443'
    option 'target' 'DNAT'
    option 'dest' 'lan'

where dest_ip is the IP of your laptop; run ifconfig to see it.

  4. Upload the new firewall config to the router:
$ scp ./firewall root@
  5. Now restart the firewall service on the router:
# /etc/init.d/firewall restart
  6. Now your laptop's port 443 is exposed to the world, so let's run certbot:
$ sudo certbot certonly --standalone --preferred-challenges tls-sni
  7. Then convert the keys to DER format and upload them to the router as described above. Then disable the forwarding rule, restore the previous one, and restart the firewall and uhttpd.

Now your certs are renewed.

Transliteration to ASCII

If you need to transliterate text from any language into ASCII symbols, you can use the Transliterator from ICU4J.

private static final String TRANSLITERATION_RULE = "Any-Latin; Latin-ASCII";
private static final Transliterator TRANSLITERATOR =
        Transliterator.getInstance(TRANSLITERATION_RULE);

private static String transliterate(String name) {
    String ascii = TRANSLITERATOR.transliterate(name);
    // Some Russian names may contain the Soft Sign ( Ь ) and Hard Sign ( Ъ ),
    // which come out as the modifier letters ʹ and ʺ and may cause errors
    ascii = ascii.replaceAll("[ʹʺ]", "");
    return ascii;
}

ICU Transform Demonstration:
1) Select «Names» from the «Insert sample» combo box.
2) Insert the rule «Any-Latin; Latin-ASCII» into the «Compound 1» field.
3) Press the «Transform» button.

Also a good example:
How do I convert Chinese characters to their Latin equivalents?

What are the system Transliterators available with ICU4J?

[Reading] Three very good practical how-to articles for programmers

From time to time I go through what has piled up in my bookmarks. Here are excellent, pragmatic articles; digest them and you immediately get +2 to your developer XP.
Extreme Programming: Pair Programming. A clear, to-the-point guide to what pair programming is, how to do it in practice and, most importantly, how NOT to do it.

Pair programming in outsourcing: reaching mutual understanding with the customer's technical specialists. A great article that every outsourcing programmer must read. Conclusion: if part of your team is remote, you simply must pair on code.

The practice of refactoring in large projects. For the hundredth time about the eternal; for those who still slack and haven't read the remarkable book Working Effectively with Legacy Code.

[Grails] Avoid using Environment outside configuration files

Grails has an excellent mechanism for conditionally executing code depending on the current environment (Environment).
For example, inside DataSource.groovy you can specify different database settings:

// environment specific settings
environments {
    development {
        dataSource {
            dbCreate = "create-drop"
            url = "jdbc:h2:mem:devDb;MVCC=TRUE;LOCK_TIMEOUT=10000"
        }
    }
    production {
        dataSource {
            dbCreate = "update"
            url = "jdbc:h2:prodDb;MVCC=TRUE;LOCK_TIMEOUT=10000"
        }
    }
}

Almost every one of your configuration files probably has per-environment settings.

But I kept stumbling upon Environment.current being used inside controllers, views and services. The standard if tag even has a dedicated attribute for checking the current environment:

<g:if env="test"> ... </g:if>

I gradually came to the conclusion that this should be avoided, because it costs readability and flexibility. Instead, it is better to explicitly create an option in Config.groovy, enable or disable it per environment, and then check the option itself. For example, what does this code do?

    <g:if env="test">
        <meta name="controller" content="${controllerName}"/>
        <meta name="action" content="${actionName}"/>
    </g:if>

This code actually adds the name of the controller and the action that rendered the page, and a functional test then checks them. It can also be very handy for debugging.
Now let's create an option in Config.groovy:

environments {
    development {
        com.example.showActionNameInPageMeta = true
    }
    test {
        com.example.showActionNameInPageMeta = true
    }
    production {
        com.example.showActionNameInPageMeta = false
    }
}

and now:

    <g:if test="${grailsApplication.config.com.example.showActionNameInPageMeta}">
        <meta name="controller" content="${controllerName}"/>
        <meta name="action" content="${actionName}"/>
    </g:if>

As a result, the option improved the readability of the code. Moreover, we can now add a new environment at any time, for example testFunctional for functional tests, and simply enable our option inside it without rewriting any code.
And, which is especially important, if something goes wrong in production, a sysadmin can change the configuration and restart the server without recompiling the application.
On top of that, all such options are no longer smeared across the code; they live in one place where you can review them all.

In a broader sense this approach is called a Feature flag, and it is actively used, for example, at Amazon.
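Outside of Grails the same idea boils down to reading a boolean from configuration in exactly one place. A minimal plain-Java sketch (the class and key names are mine, not from any framework):

```java
import java.util.Properties;

public class FeatureFlags {
    private final Properties config;

    public FeatureFlags(Properties config) {
        this.config = config;
    }

    // One named accessor per flag keeps all the string keys in a single place;
    // the rest of the code asks "is the feature on?" and never sees environments.
    public boolean showActionNameInPageMeta() {
        return Boolean.parseBoolean(
            config.getProperty("com.example.showActionNameInPageMeta", "false"));
    }
}
```

Each environment then just supplies a different Properties file; flipping a flag needs a config change and a restart, not a recompile.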

[Grails] favicon.ico and robots.txt

There are two small but important things we usually pay no attention to, yet sooner or later have to tinker with anyway.

Where should the bookmark icon live?

For example, favicon.ico is a very important little thing: an icon significantly improves how recognizable a bookmark is.
The main layout generated by Grails already has the favicon path wired in:

    <link rel="shortcut icon" href="${resource(dir: 'images', file: 'favicon.ico')}" type="image/x-icon">

The browser is supposed to figure out that the favicon lives at the URL /static/images/favicon.ico

It turns out that browsers don't like the favicon living anywhere but the site root, i.e. anywhere but /favicon.ico. For example, when the site is loaded in a frame, or a plain image from the site is opened, some browsers request the favicon from the site root.

And if the file is not found, or the user has no permission to view that address, error reports will keep appearing in the server logs.

How to solve it?

The first thing that comes to mind is to add a redirect to the correct icon path in UrlMappings. But of course you should not do that: remember that everything in the web-app folder is already available by direct link.
So it is enough to simply move the favicon one directory up:

mv web-app/images/favicon.ico web-app/favicon.ico

And tweak our layout grails-app/views/layouts/main.gsp:

    <link rel="shortcut icon" href="/favicon.ico" type="image/x-icon">

Now the favicon is available right from the site root: /favicon.ico.
Forums say that mobile devices also request their own icons in other resolutions, but I have none at hand right now to check.

Protecting ourselves from search engines

Many probably remember the story when all SMS messages sent from the Beeline website became available through the Yandex search engine. It happened through a combination of circumstances, one of which was that the developers had not forbidden search engines to index the SMS-sending pages.
To protect yourself, you should likewise forbid search engines to index the admin area of your site and any other private sections.
To do that, put a robots.txt file in the root of your site with roughly the following content:

User-agent: *
Disallow: /
Allow: /faq/

When search engines crawl your site they will obey these rules and will not index anything except the /faq/ page.

As you can easily guess, the robots.txt file also has to go into web-app so that it becomes available by direct link.

If plain robots.txt settings are no longer enough for you, have a look at its XML descendant, Sitemaps.
There is even a grails-sitemapper plugin for Grails, but it looks abandoned, so double-check its contents ten times over.

Ideally these two things should become part of standard Grails. Someday I will file a ticket in the tracker about it.

[Grails] i18n: LocaleResolver, Accept-Language

Warning!!! I made the grails-locale-configuration-plugin, which replaces this solution: [Grails] Today I published a stable version of grails-locale-configuration-plugin
This chapter of the book may also interest you.

In every browser the user can configure their preferred languages. For example, in Chrome under Settings / Languages (chrome://settings/languages) they look like this:
chrome languages settings
Here the user says they want American English, or any English; if that is unavailable, then Russian; and if Russian is unavailable too, then let it be Ukrainian.

These preferences are sent to the server in every request via the Accept-Language header; for the settings above Chrome sends something like:

Accept-Language: en-US,en;q=0.8,ru;q=0.6,uk;q=0.4

The q parameter (quality value) carries a priority from 0 to 1.

Grails understands this header and switches the locale automatically. If an internationalization exists for that language, it is shown right away and all is well. But if no bundle in i18n/ is found for the requested locale, the default text from the main bundle in i18n/ is shown, usually English.

Even though the text will be English, the session locale will be set to the one the user asked for most.
Say we have a user with Ukrainian in first place and Russian in second.
On a page request Grails will remember the Ukrainian locale in the session but render everything in English.
That is not always good, especially if your site strictly supports only a few locales, for example when you show different flags depending on the locale.
Most likely you will want to restrict the locales you support.
To do that, create a supportedLocales option in Config.groovy:

supportedLocales = [Locale.ENGLISH, new Locale('ru')]

Now let's create a filter that will check the locale:

class LocaleResolverFilterFilters {

    def filters = {
        all(controller: '*', action: '*') {
            before = {
                // First look for the exact same locale; failing that, a locale
                // with the same language; failing that, default to English
                Locale selectedLocale
                LocaleResolver localeResolver = RequestContextUtils.getLocaleResolver(request)
                List<Locale> supportedLocales = grailsApplication.config.supportedLocales
                if (request.locale in supportedLocales) {
                    selectedLocale = request.locale
                } else {
                    selectedLocale = findLocaleWithSameLanguage(request, supportedLocales)
                }
                selectedLocale = selectedLocale ?: Locale.ENGLISH
                localeResolver.setLocale(request, response, selectedLocale)
            }
        }
    }

    private Locale findLocaleWithSameLanguage(HttpServletRequest request, List<Locale> supportedLocales) {
        supportedLocales.find({ it.language == request.locale.language })
    }
}

Later in the code we can get the locale from the request object via request.locale, and the full list of locales the user prefers via request.locales.
The locale the page was actually rendered with is visible on the response object: response.locale.
See the demo application on GitHub.
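By the way, the same "best supported locale" matching is available in the JDK itself: java.util.Locale.LanguageRange parses an Accept-Language value directly, and Locale.lookup picks the best match from a supported list. A sketch with only standard classes (the class name and the English fallback are my own choices):

```java
import java.util.Arrays;
import java.util.List;
import java.util.Locale;

public class LocaleMatch {
    static final List<Locale> SUPPORTED = Arrays.asList(Locale.ENGLISH, new Locale("ru"));

    // Pick the best supported locale for a raw Accept-Language header value,
    // falling back to English when nothing matches at all.
    static Locale resolve(String acceptLanguage) {
        List<Locale.LanguageRange> ranges = Locale.LanguageRange.parse(acceptLanguage);
        Locale best = Locale.lookup(ranges, SUPPORTED);
        return best != null ? best : Locale.ENGLISH;
    }

    public static void main(String[] args) {
        System.out.println(resolve("uk,ru;q=0.8")); // ru
        System.out.println(resolve("de"));          // en
    }
}
```

Inside a Grails filter you could delegate to something like this instead of hand-rolling the language comparison.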

Grails request.locale

What else to read on the topic: