We all tackle performance issues from time to time. To measure performance, we usually log a timestamp before and after the code block. The result may vary from run to run, though, because other apps are running at the same time, competing for the CPU cycles, memory, and other resources your code needs. That makes the numbers a poor reflection of the real situation and can easily mislead you.
Google already has a tool, Benchmark, which it uses to benchmark performance across the framework. Now Google has made it usable on production devices without root access, which is handy when a developer is trying to hunt down a glitch in their code.
You can check the video for more detail before jumping into the rest of this post.
Setup
Check the official documentation, or just download the samples from the following links:
In short, to run Benchmark you need at least:
- Android Studio 3.4+ for the samples, or 3.5+ if you want to set up a Benchmark module easily in your own project (a minimal Gradle sketch follows this list).
- A project that is already migrated to AndroidX.
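If you wire the module up by hand, the Gradle configuration looks roughly like this. This is a minimal sketch, not the official template: the plugin id, artifact coordinates, and versions shown here are assumptions you should check against the Benchmark documentation for your setup.

```kotlin
// build.gradle.kts of the benchmark module (sketch; versions are assumptions).
// The project-level build script also needs the Benchmark plugin on its
// classpath, e.g. "androidx.benchmark:benchmark-gradle-plugin".
plugins {
    id("com.android.library")
    id("kotlin-android")
    id("androidx.benchmark")
}

android {
    defaultConfig {
        // AndroidBenchmarkRunner helps stabilize results, e.g. by keeping the
        // process in the foreground and requesting sustained performance mode
        // where the device supports it.
        testInstrumentationRunner = "androidx.benchmark.junit4.AndroidBenchmarkRunner"
    }
}

dependencies {
    androidTestImplementation("androidx.benchmark:benchmark-junit4:1.0.0")
    androidTestImplementation("androidx.test.ext:junit:1.1.1")
    androidTestImplementation("junit:junit:4.12")
}
```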
Play
Let’s take an example from BitmapBenchmark in android-performance.
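The sample is roughly shaped like the sketch below, written against the Jetpack Benchmark JUnit4 API; the exact code (including the bitmap size used here) lives in the android-performance repository and may differ from this.

```kotlin
import android.graphics.Bitmap
import androidx.benchmark.junit4.BenchmarkRule
import androidx.benchmark.junit4.measureRepeated
import androidx.test.ext.junit.runners.AndroidJUnit4
import org.junit.Rule
import org.junit.Test
import org.junit.runner.RunWith

@RunWith(AndroidJUnit4::class)
class BitmapBenchmark {

    @get:Rule
    val benchmarkRule = BenchmarkRule()

    @Test
    fun bitmapGetPixelBenchmark() {
        // The 100x100 bitmap size is an assumption for this sketch.
        val bitmap = Bitmap.createBitmap(100, 100, Bitmap.Config.ARGB_8888)

        // measureRepeated runs the block in a warmed-up loop and reports stats.
        benchmarkRule.measureRepeated {
            var sum = 0L
            for (x in 0 until bitmap.width) {
                for (y in 0 until bitmap.height) {
                    // Accumulate the result so the work can't be optimized away.
                    sum += bitmap.getPixel(x, y)
                }
            }
        }
    }
}
```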
The test checks how long the connected device takes to do the pixel fetching. Before you click run, make sure you have plugged in a real device. If everything is set up correctly, you will see output like the following:
Started running tests
Things to consider
According to the speakers in the Google I/O session above, you should take the following into consideration before benchmarking anything:
- Don’t benchmark everything: start with Systrace, identify the slow part of your code, then use Benchmark to measure it.
- Measure a synchronous block: it’s more straightforward and stable to measure.
- Keep the block small: the target code block should ideally run within roughly 50ms.
- Benchmark hot code: Benchmark runs the code in loops, which means it may get JIT-compiled. If the code only runs once at runtime, the result may not be that representative; it’s better to benchmark code that actually runs in a loop at runtime.
- Watch out for caches: the result of your code may be cached by you, a library, or the system, which will skew the benchmark (see the sketch after this list).
- Don’t parameterize: try to use real data from the real world, not the synthetic inputs you might use in a unit test.
- DO NOT COMPARE DEVICES: Benchmark is only designed to track performance on one device over time.
- Run locally, not in CI: benchmarking implies you are changing something, so you don’t need to keep benchmarking after you finish the improvement. That said, there’s no harm in monitoring it in CI for each build.
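To make the small-synchronous-hot-block and cache points concrete, here is a sketch against the Jetpack Benchmark JUnit4 API. The list-sorting workload is made up for illustration; the point is that input preparation happens inside runWithTimingDisabled, so a previous iteration’s sorted list can’t act as a hidden cache for the next measurement.

```kotlin
import androidx.benchmark.junit4.BenchmarkRule
import androidx.benchmark.junit4.measureRepeated
import androidx.test.ext.junit.runners.AndroidJUnit4
import org.junit.Rule
import org.junit.Test
import org.junit.runner.RunWith
import kotlin.random.Random

@RunWith(AndroidJUnit4::class)
class SortBenchmark {

    @get:Rule
    val benchmarkRule = BenchmarkRule()

    @Test
    fun sortSmallList() {
        val random = Random(42)
        val unsorted = List(1_000) { random.nextInt() }

        benchmarkRule.measureRepeated {
            // Rebuild the input without timing it, so the already-sorted list
            // from the previous iteration doesn't skew the next measurement.
            val list = runWithTimingDisabled { unsorted.toMutableList() }
            // The measured block: small, synchronous, and run in a loop,
            // so it gets warmed up like hot production code.
            list.sort()
        }
    }
}
```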
Troubleshooting
Benchmark also provides warnings to guide you toward setting everything up correctly. For example, you will get the following warning if you forget to plug in a device before running:
Started running tests