
Setup DDNS/DynDNS in OpenWrt

I serve my small homepage stokito.com directly from my router running OpenWrt Linux, so I don't have to pay for any hosting: the router is always online anyway. My ISP assigns my router a public IP that changes from time to time, maybe about once a week. That is fine for me, but I had to update the DNS record manually every time. Of course, I could buy a static public IP from my internet provider, but my goal is to run the website as cheaply as possible. So I need to automatically and periodically update the DNS A record with my current IP.

To solve this problem people use Dynamic DNS (DDNS), a de facto pseudo-protocol in which the router itself keeps registering its current IP on the DNS server. Most routers already support a few DDNS providers, the most popular being DynDNS.com and NO-IP.com; some manufacturers, like ASUS, even run their own DDNS. Gamers and owners of IP cameras use this a lot.

Unfortunately, my DNS registrar doesn't support the DDNS protocol, so I had to use another one. The good news is that OpenWrt already has the ddns-scripts package, which supports a lot of servers. I checked almost all the DDNS providers it supports and chose DuckDNS.org.

DynDNS.com looks like one of the first DDNS providers, and some others even try to implement its API. But it's paid, and that's not acceptable for me, because for the same money I could just buy a static IP. NO-IP.com has some strange API problems with refreshing the IP, so there is even a separate OpenWrt script for it: ddns-scripts_no-ip_com. DuckDNS, in contrast, looks like it was made by programmers for programmers: you can quickly register with a Google account, you get a generated random token instead of a password, and the documentation is good.

In fact, the API is so simple that I even wondered why the ddns-scripts package was created at all. All you need to do is register on DuckDNS and receive your token (i.e. your password), then log in to your OpenWrt LuCI admin panel, open System / Scheduled Tasks and add the following line:

0 */4 * * * wget -4 -q -O /dev/null http://www.duckdns.org/update/{YOURDOMAIN}/{YOURTOKEN}

i.e. every 4 hours you will send an HTTP GET request to DuckDNS.

Then you can check the cron logs in the syslog: System / System Log. For example, for my domain stokito.duckdns.org:

Mon Apr 22 18:52:00 2019 cron.info crond[12903]: USER root pid 14005 cmd wget -4 -q -O /dev/null http://www.duckdns.org/update/stokito/6c5se9d3-5220-440-b46-6873f9a

But for some reason this setup via LuCI didn't work for me, so it is better to do the same from the command line. Log in and edit the crontab:

ssh root@192.168.1.1
root@OpenWrt:~# echo "42 */4 * * * /etc/update_ddns.sh" >> /etc/crontabs/root

or you can edit it interactively:

root@OpenWrt:~# crontab -e

crontab -e opens the vi editor on /etc/crontabs/root. Also note that I enabled the cron service, just to be sure. See the OpenWrt cron documentation for details.

Now put there a line like this:

42 */4 * * * /etc/update_ddns.sh

Note that I used a random minute, 42, to save DuckDNS from waves of requests when all users try to update their DNS at the top of the hour. So please pick some other minute too.

Then create /etc/update_ddns.sh with this script:

#!/bin/sh
wget -4 -q -O /dev/null http://www.duckdns.org/update/{YOURDOMAIN}/{YOURTOKEN}

and chmod +x it.
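You can also run the update once by hand to check that the domain and token are correct. If you print the response instead of discarding it, DuckDNS answers OK on success and KO on failure:

root@OpenWrt:~# wget -4 -qO- http://www.duckdns.org/update/{YOURDOMAIN}/{YOURTOKEN}
OK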

Now you need to enable and restart the cron service:

root@OpenWrt:~# /etc/init.d/cron enable
root@OpenWrt:~# /etc/init.d/cron restart
root@OpenWrt:~# logread | grep cron

The last command is useful for viewing the cron logs. If everything worked, your IP will be updated in the DuckDNS dashboard; see the "Last time changed" field. To make crond log more details you can tune the cronloglevel in /etc/config/system.
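For example, with uci (assuming busybox crond semantics, where a lower cronloglevel means more verbose logging):

root@OpenWrt:~# uci set system.@system[0].cronloglevel='0'
root@OpenWrt:~# uci commit system
root@OpenWrt:~# /etc/init.d/cron restart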

Then your router will be accessible via the new domain. For example, for my domain:

$ dig stokito.duckdns.org

; <<>> DiG <<>> stokito.duckdns.org
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 41868
;; flags: qr rd ra; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;stokito.duckdns.org.		IN	A

;; ANSWER SECTION:
stokito.duckdns.org.	60	IN	A	77.122.151.58

;; Query time: 212 msec
;; SERVER: 176.103.130.131#53(176.103.130.131)
;; WHEN: Mon Apr 22 23:55:45 EEST 2019
;; MSG SIZE  rcvd: 129

Here you can see that the DNS server 176.103.130.131 (BTW, that's AdGuard DNS) responded that the IP of the domain stokito.duckdns.org. is 77.122.151.58, i.e. my public IP.

Use a regular domain as an alias for the DDNS

I already have the domain stokito.com, and I would like to use it instead of the DDNS name stokito.duckdns.org. DNS supports this: all I need to do is add a CNAME record for my domain stokito.com pointing to the DDNS name stokito.duckdns.org. But DNS allows this only for subdomains, i.e. I can map blog.stokito.com to stokito.duckdns.org, but I can't do that for the root domain stokito.com. The reason is that a CNAME can't coexist with other records at the same name, and the zone apex must carry SOA and NS records, so most DNS providers enforce this rule. I added a subdomain record router.stokito.com mapped via CNAME to stokito.duckdns.org, and here is how it resolves now:

$ dig router.stokito.com
; <<>> DiG 9.11.5-P1-1ubuntu2.3-Ubuntu <<>> router.stokito.com
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 19506
;; flags: qr rd ra; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;router.stokito.com.		IN	A

;; ANSWER SECTION:
router.stokito.com.	60	IN	CNAME	stokito.duckdns.org.
stokito.duckdns.org.	60	IN	A	77.122.151.58

;; Query time: 223 msec
;; SERVER: 176.103.130.131#53(176.103.130.131)
;; WHEN: Sun May 05 14:22:29 EEST 2019
;; MSG SIZE  rcvd: 133

You can see that router.stokito.com was first resolved to the CNAME stokito.duckdns.org., which was then resolved to my router's IP 77.122.151.58. The downside is that your router's IP is now visible to anyone who might want to hack you.

Fortunately, I use CloudFlare, which works like a proxy and protects my site from DDoS. Its free plan allows almost everything I need. What is important is that I can move my domain to the CF nameservers, and CF allows mapping a CNAME to the root domain stokito.com. So in the CF DNS settings I set the CNAME, and now when I open stokito.com my website is served from the router. In fact, they don't create a real alias: the stokito.com domain refers to a CF IP address, and internally they proxy HTTP requests to stokito.duckdns.org.

CloudFlare DNS settings screenshot

So I configured these domains:

  1. router.stokito.com is a CNAME to stokito.duckdns.org; note that its cloud icon is gray, which means that CF will not proxy this domain and it works as plain DNS. Thus router.stokito.com will always resolve to my router's IP via the DDNS stokito.duckdns.org, as you already saw in the dig output above.
  2. The wildcard * record means that any other subdomain will also resolve to my router's IP. In fact you don't need this; I just wanted to show that you have this possibility.
  3. The root domain stokito.com and its www subdomain are proxied (i.e. the orange cloud icon) to stokito.duckdns.org. The real IP of my router is hidden in this case, and it's protected from DDoS by CF.

Now you can check that the root domain stokito.com resolves to the CF proxy:

$ dig stokito.com

; <<>> DiG 9.11.5-P1-1ubuntu2.3-Ubuntu <<>> stokito.com
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 35463
;; flags: qr rd ra; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;stokito.com.			IN	A

;; ANSWER SECTION:
stokito.com.		3600	IN	A	104.28.4.8
stokito.com.		3600	IN	A	104.28.5.8

;; Query time: 51 msec
;; SERVER: 176.103.130.131#53(176.103.130.131)
;; WHEN: Sun May 05 14:05:34 EEST 2019
;; MSG SIZE  rcvd: 94

The IP addresses 104.28.4.8 and 104.28.5.8 belong to CloudFlare.

Configure uhttpd webserver to work with the dynamic domain

In fact, you can just use the DDNS name directly in /etc/config/uhttpd instead of the IP address, i.e.:

config uhttpd homepage
  option realm homepage
  list listen_http 'stokito.duckdns.org:80'
  option home '/tmp/www/stokito.com'
  option rfc1918_filter '0'

Here I configured my homepage on port 80, but instead of my external IP address 77.122.151.58 I just used my DDNS name stokito.duckdns.org. The point is that my domain stokito.com now refers to CloudFlare, so I can't use it here and have to use the DDNS name.
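Don't forget to restart uhttpd after changing the config:

root@OpenWrt:~# /etc/init.d/uhttpd restart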

When the eth1 (i.e. wan) network interface is restarted, it may receive a new IP, so we have to update our DDNS record. We can add a hook on interface up that triggers the same command we put into cron. To do so, create a hook script /etc/hotplug.d/iface/96-update-ddns.sh:

#!/bin/sh
# re-register the (possibly new) IP in DuckDNS when an interface comes up
case "$ACTION" in
	ifup)
		/etc/update_ddns.sh
	;;
esac

I set it’s prio to 97 to run it after 95-ddns script if you decided to use it instead of self made cron script. Just to avoid conflicts.

To restart uhttpd after the external IP has changed, you can add another hotplug script:

#!/bin/sh
# restart the homepage webserver after an interface comes up
case "$ACTION" in
	ifup)
		/etc/init.d/uhttpd enabled && sleep 30 && /etc/init.d/uhttpd restart
	;;
esac

Put it into /etc/hotplug.d/iface/97-homepage.sh. The 30-second delay gives the DNS record time to be updated.

Now let’s try:

# ifconfig eth1 down
# ifconfig eth1
eth1      Link encap:Ethernet  HWaddr 00:C5:F4:71:1B:9A  
          BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:2488344 errors:0 dropped:499 overruns:0 frame:0
          TX packets:818007 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:3068100023 (2.8 GiB)  TX bytes:84736706 (80.8 MiB)
          Interrupt:4 
# ifconfig eth1 up
# ifconfig eth1
eth1      Link encap:Ethernet  HWaddr 00:C5:F4:71:1B:9A  
          inet addr:77.122.151.58  Bcast:77.122.151.255  Mask:255.255.255.0
          inet6 addr: fe80::2c5:f4ff:fe71:1b9a/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:2487401 errors:0 dropped:499 overruns:0 frame:0
          TX packets:817808 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:3068008637 (2.8 GiB)  TX bytes:84672103 (80.7 MiB)
          Interrupt:4 

# ps | grep uhttpd
 3007 root      1296 S    /usr/sbin/uhttpd -f -h /www -r main -x /cgi-bin -p 192.168.1.1
 3008 root      1296 S    /usr/sbin/uhttpd -f -h /tmp/www/stokito.com -r homepage1 -p stokito.duckdns.org:80
 3018 root      1200 S    grep uhttpd

Explanation:

  1. Stop the eth1 interface. At this moment the internet goes down.
  2. Check the eth1 details to make sure there is no external IP.
  3. Start eth1 with ifconfig eth1 up and check its details: the IP has been obtained again.
  4. Check that the uhttpd process was restarted after the 30-second delay. To be sure it was restarted, you can change the site name or realm in /etc/config/uhttpd and then see that the name changed after the restart. For example, here you may notice that I changed the homepage realm name to homepage1.

In fact, we don't have to restart uhttpd if the IP hasn't changed, and when we do detect an IP change we could start uhttpd with the new IP, for example by updating the config with uci. It's not so easy to get the IP from an interface name, but you can look at the getLocalIp function from the ddns-scripts dynamic_dns_functions.sh.
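For completeness, here is a rough sketch of such a conditional restart (my own file and variable names; it assumes OpenWrt's /lib/functions/network.sh helpers are available):

#!/bin/sh
# restart uhttpd only when the wan IP actually changed
. /lib/functions/network.sh
network_flush_cache
network_get_ipaddr NEW_IP wan                 # current IPv4 address of wan
OLD_IP="$(cat /tmp/last_wan_ip 2>/dev/null)"
if [ -n "$NEW_IP" ] && [ "$NEW_IP" != "$OLD_IP" ]; then
	echo "$NEW_IP" > /tmp/last_wan_ip
	/etc/update_ddns.sh                   # push the new IP to DuckDNS
	sleep 30                              # give the DNS record time to update
	/etc/init.d/uhttpd restart
fi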

But the unconditional restart is much simpler, so I decided to keep it.

Protect the router from hackers: allow access to the HTTP server only from CloudFlare proxy IPs

Since my website should be accessible only through CloudFlare, I need to allow the CF IPs and deny all others. I denied access to port 80 in the /etc/config/firewall file, and to allow the CF IPs you need to add this script to /etc/firewall.user:

# allow HTTP/HTTPS only from the published CloudFlare IPv4 ranges
for ip in $(wget -qO- http://www.cloudflare.com/ips-v4); do
  iptables -I INPUT -p tcp -m multiport --dports http,https -s "$ip" -j ACCEPT
done

The script fetches the list of CF IPs and allows them via iptables.
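If your router also has public IPv6 connectivity, presumably the same can be done for CloudFlare's IPv6 ranges (an untested sketch along the same lines):

for ip in $(wget -qO- http://www.cloudflare.com/ips-v6); do
  ip6tables -I INPUT -p tcp -m multiport --dports http,https -s "$ip" -j ACCEPT
done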



[LinkSet] Compatibility

Theoretical part

 

Articles from Oracle

JEP 223: New Version-String Scheme

 

Six kinds of compatibility

Each Joda-Time release describes incompatible changes categorized into six kinds:

Compatibility between 2.8 and 2.9
———————————
Build system — Yes
Binary compatible — Yes
Source compatible — Yes
Serialization compatible — Yes
Data compatible — Yes
— DateTimeZone data updated to version 2015g
Semantic compatible — Yes

When binary compatibility is broken, the major version is changed. For example, see the v2.0 changelist.

Explanation from Stephen Colebourne, the author of Joda-Time:

Build system

Not part of compatibility, just a fact about the build system in use

Example: in v2.2

 - Ant build removed. Build only on Maven now.

Also, I think, it may include changes in artifact coordinates: groupId, artifactId, or even a changed repository. Maybe some changes in the manifest, for example required dependencies, or added OSGi manifest info. Maybe some classes were repackaged. Or an old artifact was built with Ant and didn't contain a Maven pom.xml manifest, like the horrible Xerces library, and was later mavenized.

It may also be a recompilation with optimizations or without debug information.

But I think that a change in packaging may deserve to be a separate kind of compatibility change.

Binary compatible and Source compatible

Whether the whole API is binary compatible or source compatible. See this document.

Serialization compatible

Whether the serialization format is compatible with the previous version.

Data compatible

Whether the data, time-zone data in this case, has changed.

For example, some time zone was changed or even removed. If your database stores an old timezone ID and the application tries to create a date object with this timezone ID, you'll get an exception.

Semantic compatible

Whether there is some other semantic change. This is relevant when the classes and methods have not changed, but the meaning of a method has.

See the section Serialization incompatibilities below.

For example: v2.0 has a lot of semantic changes:

Previously, DateTimeZone.forID matched time zone names case-insensitively, now it is case-sensitive

I think it is always hard to separate semantic changes from bug fixes. For example, the JDK had a bug with using null as a key in a Map, and it was a question whether this is a bug or a feature.

Another example is the Apache Commons Lang library. In version 3 the methods StringUtils.isAlpha(), isNumeric() and isAlphanumeric() all return false when passed an empty String; previously they returned true.
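A tiny illustration of this semantic change (the class name is mine; it assumes commons-lang3 on the classpath):

import org.apache.commons.lang3.StringUtils;

public class EmptyStringCheck {
    public static void main(String[] args) {
        // commons-lang 2.x returned true here; commons-lang3 returns false
        System.out.println(StringUtils.isAlpha(""));
        System.out.println(StringUtils.isNumeric(""));
    }
}

The signatures did not change at all, so neither binary nor source compatibility is affected; only the meaning changed.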

Semantic compatibility looks correlated with behavioral compatibility: it is not the signatures that change, but the flow or the contract.

Also, it is a frequent situation that behavior was simply left undefined by the contract. For example, the old Vector class doesn't declare any behavior for removing elements during iteration; that's why the new ArrayList class was created and Vector became superseded.
So deprecation, like any change in a contract, may also be a kind of semantic change.

 

Serialization incompatibilities

Compatible incompatibility

Also, I think there should be some kind of «reverse compatibility» that concerns contract usage. For example, I saw an issue where a subclass of HashMap didn't follow the contract. The change was compatible in every way, but all clients became incompatible. How to predict this, I don't know.

Every developer has had the experience of being unable to change something in an API because some clients abuse it or implemented it incorrectly. So it should have its own classification, and developers should think about this kind of reverse incompatibility.

Complexity compatibility

I mean the complexity of a program in a wide sense: algorithm speed and its growth rate, consumption of resources (memory, I/O, CPU, disk space), and even source code beautification and structural changes like refactoring. You know, some algorithms may use more memory but less CPU.

For example, on some early versions of the 64-bit JDK your application could fail with an OutOfMemoryError, although the change was absolutely compatible in all the categories mentioned before.

Another example is when a new version of a program works more slowly than the previous one.

It may not change the contract, the flow, or anything else, but such changes still require attention.

 

Tools for checking API and ABI

Service provider interface (SPI)

https://stackoverflow.com/questions/2954372/difference-between-spi-and-api
https://en.wikipedia.org/wiki/Service_provider_interface

Also interesting

JAR Hell

[LinkSet] Dependency duplicates in Maven and Jar Hell with Java Classloader

Theoretical part:
Java Classloader

Maven: Introduction to the Dependency Mechanism

What is JAR hell? (Or is it classpath hell? Or dependency hell?)

Jar Hell made Easy — Demystifying the classpath with jHades
See the JHades documentation; it is very useful for finding overlapping jars.

Another tool for dealing with Jar Hell is Weblogic Classloader Analysis Tool.

One of the most problematic dependencies is Xerces: the Xerces Hell. It's a good example of how not to make a library.

This presentation is a great resource about Jar Hell and the different types of classpath-related exceptions, though it is a little bit boring.

Maven Dependency Wrangling

dealing with dependency chain issues in maven

The Maven Enforcer plugin has the extra rule
Ban Duplicate Classes.
It is also very useful if you have a legacy project that still runs under Java 6 or 7: you should avoid dependencies compiled with the newer Java 8,
and for that you can use enforceBytecodeVersion.

Please also make sure that you specified requireJavaVersion (to compile), requireUpperBoundDeps, requireReleaseDeps, requirePluginVersions and other useful standard rules.

Also, if you have submodules in your project, the ono-extra-enforcer-rules will be useful as well.

So your enforcer rules may look like this:

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-enforcer-plugin</artifactId>
    <version>1.4.1</version>
    <executions>
        <execution>
            <id>enforce</id>
            <goals>
                <goal>enforce</goal>
            </goals>
            <configuration>
                <rules>
                    <bannedPlugins>
                        <!-- will only display a warning but does not fail the build. -->
                        <level>WARN</level>
                        <excludes>
                            <exclude>org.apache.maven.plugins:maven-verifier-plugin</exclude>
                        </excludes>
                        <message>Please consider using the maven-invoker-plugin (http://maven.apache.org/plugins/maven-invoker-plugin/)!</message>
                    </bannedPlugins>
                    <requireMavenVersion>
                        <version>3.0.5</version>
                    </requireMavenVersion>
                    <requireJavaVersion>
                        <version>1.8</version>
                    </requireJavaVersion>
                    <requireReleaseDeps>
                        <onlyWhenRelease>true</onlyWhenRelease>
                        <message>No Snapshots Allowed!</message>
                    </requireReleaseDeps>
                    <requireUpperBoundDeps>
                        <!-- 'uniqueVersions' (default:false) can be set to true if you want to compare the timestamped SNAPSHOTs  -->
                        <!-- <uniqueVersions>true</uniqueVersions> -->
                    </requireUpperBoundDeps>
                    <reactorModuleConvergence>
                        <message>The reactor is not valid</message>
                        <ignoreModuleDependencies>true</ignoreModuleDependencies>
                    </reactorModuleConvergence>
                    <requirePluginVersions>
                        <message>Best Practice is to always define plugin versions!</message>
                        <banLatest>true</banLatest>
                        <banRelease>true</banRelease>
                        <banSnapshots>true</banSnapshots>
                        <phases>clean,deploy,site</phases>
                        <additionalPlugins>
                            <additionalPlugin>org.apache.maven.plugins:maven-eclipse-plugin</additionalPlugin>
                            <additionalPlugin>org.apache.maven.plugins:maven-reactor-plugin</additionalPlugin>
                        </additionalPlugins>
                        <unCheckedPluginList>org.apache.maven.plugins:maven-enforcer-plugin,org.apache.maven.plugins:maven-idea-plugin</unCheckedPluginList>
                    </requirePluginVersions>
                    <enforceBytecodeVersion>
                        <maxJdkVersion>1.6</maxJdkVersion>
                        <excludes>
                            <exclude>org.mindrot:jbcrypt</exclude>
                        </excludes>
                    </enforceBytecodeVersion>
                    <banDuplicateClasses>
                        <ignoreClasses>
                            <!-- example of ignoring one specific class -->
                            <ignoreClass>com.xyz.i18n.Messages</ignoreClass>

                            <!-- example of ignoring with wildcards -->
                            <ignoreClass>org.apache.commons.logging.*</ignoreClass>
                        </ignoreClasses>
                        <findAllDuplicates>true</findAllDuplicates>
                    </banDuplicateClasses>
                    <banCircularDependencies/>
                    <ForbidOverridingManagedDependenciesRule>
                        <excludes>
                            <!-- guava in parent is too old, so allow to override it -->
                            <exclude>com.google.guava:guava</exclude>
                        </excludes>
                    </ForbidOverridingManagedDependenciesRule>
                    <ForbidOverridingManagedPluginsRule/>
                    <ForbidDependencyManagementInSubModulesRule/>
                    <ManageAllModulesRule/>
                </rules>
            </configuration>
        </execution>
    </executions>
    <dependencies>
        <dependency>
            <groupId>org.codehaus.mojo</groupId>
            <artifactId>extra-enforcer-rules</artifactId>
            <version>1.0-beta-3</version>
        </dependency>
        <dependency>
            <groupId>net.oneandone.maven</groupId>
            <artifactId>ono-extra-enforcer-rules</artifactId>
            <version>0.1.1</version>
        </dependency>
    </dependencies>
</plugin>

Two attempts to find duplicated classes with Maven
Remove duplicate classes the agile way: Maven Duplicate Finder Plugin

Finding Duplicate Class Definitions Using Maven

Both of these plugins are discussed here:
Figuring out duplicate class definitions using the Analyze goal

Also, the maven-shade-plugin checks for overlapping classes while packaging an uber-jar.

Resolve conflicts using dependency:tree -Dverbose.
It shows which dependencies are duplicated (omitted for duplicate) and which are evicted by a newer version (version managed from 1.6), but it doesn't show which dependencies were excluded.
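For example, to inspect a single suspicious dependency (the groupId:artifactId filter here is just an illustration):

$ mvn dependency:tree -Dverbose -Dincludes=commons-logging:commons-logging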

Another good thing worth doing is to enable failing the build on dependency analysis warnings. Note: it is bound to the verify phase, which runs after package.

JDEPS Java Dependency Analysis Tool from JDK 8

Also some related articles

Version compatibility

For example, the changelog of Joda-Time v2.9:

Compatibility with 2.8

Build system — Yes
Binary compatible — Yes
Source compatible — Yes
Serialization compatible — Yes
Data compatible — Yes
— DateTimeZone data updated to version 2015g
Semantic compatible — Yes

See also the other [LinkSet] Compatibility.

Jigsaw

A situation is possible when two libraries want to use the same dependency but in different versions. Unfortunately, in such cases we can't manage this and have to use -nodep versions.
Finally, this problem should be resolved by JDK 9 Jigsaw: a jar can be declared as a module and it will run in its own isolated class loader, which reads class files from other similar module class loaders in an OSGi sort of way.
This will allow multiple versions of the same jar to coexist in the same application if needed.

Working with deprecation

Upgrading dependencies may require removing some old code that depends on them.
This should also be done in the right way, so here are some links that may help:
* JEP 277
* Dr. Deprecator Prescriptions: important things that you should know about obsolete Java API

Speed up maven build

It is also a related topic. The main reason I decided to add it here is that, while speeding up a build, you will usually find a lot of problems with the dependency graph.
It will help you make your project more modular. Also, for example, a parallel build may fail if your tests conflict with each other (share the same resources; for example, integration tests may use the same port).

Dependency analyzers

Also useful

* japicmp: a tool to compare two versions of a jar archive.
* Java API Compliance Checker: A Perl script that uses javap to compare two jar archives. This approach cannot compare annotations and you need to have Perl installed.
* Clirr: A tool written in Java that compares two libraries for binary compatibility. Tracking of API changes is implemented only partially, tracking of annotations is not supported. Development has stopped around 2005.
* JDiff: A Javadoc doclet that generates an HTML report of all API changes. The source code for both versions has to be available, the differences are not distinguished between binary incompatible or not. Comparison of annotations is not supported.
* revapi: An API analysis and change tracking tool that was started about the same time as japicmp. It ships with a maven plugin and an Ant task, but the maven plugin currently (version 0.4.1) only reports changes on the command line.

[Grails] ConfigObject

A popular question about ConfigSlurper:

Hey guys, I had in a project the following:
String variable = grailsApplication.config.grails.property.name + grailsApplication.config.grails.anotherproperty.name

And was working for almost a year but suddenly it stop working and throwing an error regarding the ConfigObject doesn’t have a plus method, did something similar happened to anyone?
Did someone knows why it was working and suddenly stop working ?

It doesn’t work because one of ‘property.name’ or ‘anotherproperty.name’ is not set.

If an option is not set, ConfigSlurper returns an instance of ConfigObject instead of null.
That's a common mistake.
The good news is that an empty ConfigObject is coerced to false by the Groovy Truth.
Thus, to avoid this kind of mistake and get a sane default instead of a ConfigObject, you can use the Elvis operator and write something like:

// return empty list if supportedLocales isn't set
grailsApplication.config.supportedLocales ?: []

// return null if defaultLocale isn't set
grailsApplication.config.defaultLocale ?: null
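So the snippet from the question above could be made safe like this (property names taken from the question):

String variable = (grailsApplication.config.grails.property.name ?: '') +
                  (grailsApplication.config.grails.anotherproperty.name ?: '')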

Conditional Verbosity With Temporary Log Queues

I found a great piece of advice in the article Optimal Logging, and it is worth mentioning separately:

When errors occur, the log should contain a lot of detail. Unfortunately, detail that led to an error is often unavailable once the error is encountered. Also, if you’ve followed advice about not logging too much, your log records prior to the error record may not provide adequate detail. A good way to solve this problem is to create temporary, in-memory log queues. Throughout processing of a transaction, append verbose details about each step to the queue. If the transaction completes successfully, discard the queue and log a summary. If an error is encountered, log the content of the entire queue and the error. This technique is especially useful for test logging of system interactions.

It's a cool idea, and I'll try it in practice.
Michael Würtinger created an SLF4J extension for doing it.
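To make the idea concrete, here is a minimal hand-rolled sketch with plain SLF4J (class and method names are mine, not the extension's API):

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import java.util.ArrayList;
import java.util.List;

/** Buffers verbose messages per transaction and logs them only on error. */
class TransactionLog {
    private static final Logger log = LoggerFactory.getLogger(TransactionLog.class);
    private final List<String> queue = new ArrayList<>();

    /** Cheap in-memory append; nothing is written to the log yet. */
    void debug(String message) {
        queue.add(message);
    }

    /** On success: discard the details and log only a short summary. */
    void commit(String summary) {
        queue.clear();
        log.info(summary);
    }

    /** On failure: dump the whole queue and then the error itself. */
    void fail(String error, Throwable t) {
        for (String message : queue) {
            log.error(message);
        }
        log.error(error, t);
        queue.clear();
    }
}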