Android applications today form what I like to call a "hive mind": they optimise themselves based on how users across arbitrary devices are using the app.

A thread on the Android Runtime optimisations that have led to this.
For starters, Android is built on Linux, and every application runs inside a Linux process. Each process is a fork of Zygote, the first process Android creates when the device boots up (it also takes care of preloading common framework classes).
Android is a multi-process OS where each process runs isolated from the others, with its own private address space. This prevents the whole OS from going down if some random process crashes 😅
All the Kotlin/Java code we write gets compiled into .dex files, and this has been the case for as long as Android applications have existed. The dex format hasn’t changed much in the last 10+ years (talk about backward compatibility).
However, the execution of .dex files has changed a lot. Android’s Dalvik VM was modelled on the JVM, with the exception that .dex is a register-based format instead of the stack-based format traditional JVMs use. This makes .dex smaller and better optimised for mobile devices.
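To make the stack-vs-register distinction concrete, here is a toy sketch (not real dex or JVM bytecode; the instruction names in the comments are invented) that evaluates `(a + b) * c` both ways. The stack machine needs more, smaller instructions because operands are implicit; the register machine needs fewer, wider instructions because operands are named explicitly, which is the trade-off dex makes.

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class StackVsRegister {
    // Stack machine: operands live on an implicit stack,
    // so the same expression takes more instructions.
    static int stackMachine(int a, int b, int c) {
        Deque<Integer> stack = new ArrayDeque<>();
        stack.push(a);                          // PUSH a
        stack.push(b);                          // PUSH b
        stack.push(stack.pop() + stack.pop());  // ADD
        stack.push(c);                          // PUSH c
        stack.push(stack.pop() * stack.pop());  // MUL   -> 5 instructions
        return stack.pop();
    }

    // Register machine: operands are explicit registers,
    // so the same expression takes fewer instructions.
    static int registerMachine(int a, int b, int c) {
        int v0 = a + b;   // add-int v0, vA, vB
        int v1 = v0 * c;  // mul-int v1, v0, vC  -> 2 instructions
        return v1;
    }

    public static void main(String[] args) {
        System.out.println(stackMachine(2, 3, 4));    // 20
        System.out.println(registerMachine(2, 3, 4)); // 20
    }
}
```

Both produce the same result; the register form simply encodes it in fewer dispatch steps.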
Before Android 5.0, the Dalvik runtime used a tactic familiar from traditional VMs: Just-In-Time (JIT) compilation. The runtime figures out which code paths are executed repeatedly and converts them into native code.
Why native code? dex instructions are machine-independent and can run on any phone, but they must be translated into machine instructions for the SoC's architecture (x86, ARM, 32/64-bit etc.). Interpreting dex instructions is therefore always slower than executing machine instructions directly.
The JIT sits alongside the application process, continuously profiling code paths that are becoming hot and converting them into machine code. The only caveat: this information is lost when the application process is killed.
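A minimal sketch of the "hot path" idea: count invocations per method and promote a method to compiled code once it crosses a threshold. The threshold value and method names here are made up for illustration; a real JIT uses much larger, carefully tuned counters.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class HotMethodProfiler {
    private static final int HOT_THRESHOLD = 3; // real JITs use larger, tuned values
    private final Map<String, Integer> counts = new HashMap<>();
    private final Set<String> compiled = new HashSet<>(); // stand-in for a native code cache

    // Imagine the interpreter calling this on every method invocation.
    void onInvoke(String method) {
        int n = counts.merge(method, 1, Integer::sum);
        if (n >= HOT_THRESHOLD && compiled.add(method)) {
            System.out.println("JIT-compiling hot method: " + method);
        }
    }

    boolean isCompiled(String method) {
        return compiled.contains(method);
    }

    public static void main(String[] args) {
        HotMethodProfiler profiler = new HotMethodProfiler();
        for (int i = 0; i < 5; i++) profiler.onInvoke("MainActivity.onDraw");
        profiler.onInvoke("SettingsActivity.onCreate");
        System.out.println(profiler.isCompiled("MainActivity.onDraw"));       // true
        System.out.println(profiler.isCompiled("SettingsActivity.onCreate")); // false
        // When the process is killed, both in-memory maps vanish --
        // exactly the pre-Android-7.0 problem described above.
    }
}
```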
But surely there are better options for producing machine code? Apple's App Store, for example, recompiles code on the server before delivering it to iOS devices, but Android is far too fragmented to do this, and OEMs have the ability to override the runtime as well.
Starting with Android 5.0, ART converts all .dex code into .oat machine code at install time. This is great for application cold start, since you don’t need a VM to execute machine code, and no profiling is needed either.
Downsides? Sure 🙃 Converting .dex code to .oat is a heavy process. If you had a device that updated from 4.4 to 5.0, you might have noticed install times increasing a lot. Boot times increased too, because applications were often re-optimised on boot.
.oat files can become ~8 times larger than .dex files, which means applications also took up far more space than they used to.
This changed with Android 7.0, where install-time AOT compilation was removed entirely. Instead, JIT made a surprise return with a new feature: it can now persist the profiles it creates, and those profiles are used to convert dex code to machine code in the background.
Currently, these code-path profiles are uploaded to Google Play and served to supported devices. So if a large number of users launch a screen in your app, that screen loads faster for the next set of users, because the runtime already knows what to optimise.
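The core of the Android 7.0 idea can be sketched as: write the hot-method list to disk so it survives process death, and read it back on the next run so the background compiler knows what to AOT-compile without re-profiling. The file name and line-per-method format below are invented for illustration; ART's real profiles are binary files managed by the system.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

public class PersistedProfile {
    // Persist the profile: one hot method per line (illustrative format).
    static void saveProfile(Path file, List<String> hotMethods) throws IOException {
        Files.write(file, hotMethods);
    }

    // Reload the profile on the next launch; empty if none exists yet.
    static List<String> loadProfile(Path file) throws IOException {
        return Files.exists(file) ? Files.readAllLines(file) : List.of();
    }

    public static void main(String[] args) throws IOException {
        Path profile = Files.createTempFile("app", ".prof.txt");

        // "First run": the JIT observed these methods getting hot.
        saveProfile(profile, List.of("MainActivity.onCreate", "Feed.render"));

        // "Next run" (after process death): the background compiler can
        // compile exactly these methods ahead of time, no re-profiling needed.
        List<String> hot = loadProfile(profile);
        System.out.println(hot);
    }
}
```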
Glossary
Dalvik: Android's runtime till 4.4 (also a village in Iceland)
JIT: Just-In-Time compilation
ART: Android Runtime (duh), 5.0 onwards
dex: Dalvik executable
oat: native file format for ART (for the full form, inspect the next line)
AOT: Ahead-Of-Time compilation
You can follow @_jitinsharma.