Refresh Android Debug Bridge Keys

I’m not an expert on the Android Debug Bridge (adb) by any means, but this issue cost me a couple of hours to resolve the other day, so I thought I would write this up to remind myself if it happens again.

(Screenshot: the RSA key fingerprint prompt.)

What is adb?

The Android Debug Bridge is a command line tool that lets you interact with both virtual and physical Android devices.

For example, you can press the power button on the device via the command line, as shown below.
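A minimal example, assuming adb is on your PATH and a device is connected; KEYCODE_POWER is the standard key event name for the power button (the numeric keycode 26 also works):

adb shell input keyevent KEYCODE_POWER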

For more information on adb, check out the Android docs.

What was my problem?

When trying to attach a phone to debug via USB, I wasn’t getting the prompt to trust the computer’s RSA key fingerprint. This meant that the device was not being authorized in adb. As such, debugging and running Espresso tests were impossible.

After the usual switching things off and on again and trying a different cable, I was still having no luck.

How I fixed it

I read that if the RSA key for adb is not correct, the device will not be able to trust the computer and won’t connect.

Regenerating the keys is easy; just follow these steps.

Delete the old keys

For Macs the keys are found in:

~/.android/adbkey
~/.android/adbkey.pub

On Windows they are found in:

C:\Users\{username}\.android\adbkey
C:\Users\{username}\.android\adbkey.pub
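Assuming the default locations above, you can delete them from a terminal on a Mac:

rm ~/.android/adbkey ~/.android/adbkey.pub

or from a command prompt on Windows:

del "%USERPROFILE%\.android\adbkey" "%USERPROFILE%\.android\adbkey.pub"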

Call the adb commands

If you don’t have adb added to your PATH, you will need to call adb from the right location:

For Macs:

~/Library/Android/sdk/platform-tools

For Windows:

C:\Users\{username}\AppData\Local\Android\sdk\platform-tools

Once the keys are deleted, calling the commands below will stop the adb server, restart it, and regenerate the keys.

adb kill-server
adb devices
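At this point the device should show the trust prompt again. Until you accept it, adb devices will list the device as unauthorized; the output will look something like this (the device serial here is made up):

List of devices attached
0123456789ABCDEF	unauthorized

Once you accept the new RSA key fingerprint on the device, it will show as device and debugging will work again.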

Mockito Example

Yet another post about testing.

Recently I’ve been working mainly on Android projects using Java, so I decided I needed to learn how to write tests in Java. For a mocking framework, Mockito seemed to be the popular choice. I’m most familiar with NSubstitute, so the syntax of Mockito seemed pretty strange at first; before I forget it all, I’ve made myself a cheat sheet.

Dependency

To start using Mockito, you need to add it as a test dependency to your Gradle script.

dependencies {
  testCompile "org.mockito:mockito-core:2.8.47"
}

Further details on Mockito can be found here.

RunWith

To help keep your tests clean, use the Mockito test runner annotation on your test class:

import org.junit.runner.RunWith;
import org.mockito.junit.MockitoJUnitRunner;

@RunWith(MockitoJUnitRunner.class)
public class MockitoExampleTest {

}

Further details on the test runner can be found here.

Test

Creating a test is pretty straightforward; you just need to add the Test annotation to a method within the test class.

@Test
public void myTest() {

}

mock

To create a mock, there is the static mock method.

IExampleInterface exampleInterface = mock(IExampleInterface.class);
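The static methods used throughout this cheat sheet (mock, when, verify and friends) live on the org.mockito.Mockito class, so the snippets assume a static import along the lines of:

import static org.mockito.Mockito.*;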

when

If you need to set up the mock to return values, there is the static when method. You first specify which method you are setting up, then use thenReturn to set which value you want returned.

int expectedInt = 1;
when(exampleInterface.getNumber()).thenReturn(expectedInt);

When the method being set up has parameters, you can use exact values as the expected parameter.

String expectedString = "a string";
int inputNumber = 2;

when(exampleInterface.getString(inputNumber)).thenReturn(expectedString);

If you don’t need to be exact with the expected parameter, you can use Mockito’s argument matchers. For example:

String expectedString = "a string";
when(exampleInterface.getString(anyInt())).thenReturn(expectedString);
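One gotcha worth knowing: if a method takes multiple parameters and you use a matcher for any of them, Mockito requires matchers for all of them, so exact values need wrapping in eq(). A sketch, assuming a hypothetical two-parameter overload of getString:

when(exampleInterface.getString(anyInt(), eq("a prefix"))).thenReturn(expectedString);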

Assert

Mockito plays nicely with JUnit, so you can just use the standard JUnit assertions. For example:

ExampleClass exampleClass = new ExampleClass(exampleInterface);
String result = exampleClass.getResult(inputNumber);

Assert.assertEquals(expectedString, result);

verify

Verify is used to assert against the calls made on a mock: you can check whether a certain method was called, how many times, and with what parameters.

verify(exampleInterface, times(1)).getString(anyInt());
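You can also verify exact parameter values, or that a method was never called. For example, reusing the mock from above (verify without a times argument defaults to times(1)):

verify(exampleInterface).getString(2);
verify(exampleInterface, never()).getNumber();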

Captor

Captors can be used to capture the values that are passed as arguments to your mocks for further assertions. I have found them to be pretty useful for testing callbacks, as you can capture the callback value and then call it in your test.

For example, you can use the Captor annotation and create a field in your test class for the captor.

@Captor
ArgumentCaptor<IHandleCallbacks> argCaptor;

Then using verify and .capture(), you can get the argument that has been passed to a mock.

verify(callbackClass, times(1)).doSomething(argCaptor.capture());

Using getValue(), you can then get the value from the captor to either verify it directly or, in the case of a callback, call the method and assert against what you expect to happen once the callback is called.

IHandleCallbacks callback = argCaptor.getValue();
callback.handle(1);
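Putting those pieces together, a complete captor-based test might look something like the sketch below; CallbackUser, CallbackClass, start and doSomething are hypothetical stand-ins for your own types. The @Mock annotation is an alternative to the static mock method that is wired up automatically by MockitoJUnitRunner.

import static org.mockito.Mockito.times;
import static org.mockito.Mockito.verify;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.mockito.ArgumentCaptor;
import org.mockito.Captor;
import org.mockito.Mock;
import org.mockito.junit.MockitoJUnitRunner;

@RunWith(MockitoJUnitRunner.class)
public class CallbackExampleTest {

    // Hypothetical dependency that accepts a callback
    @Mock
    CallbackClass callbackClass;

    @Captor
    ArgumentCaptor<IHandleCallbacks> argCaptor;

    @Test
    public void callbackIsInvokedWithExpectedValue() {
        // Hypothetical class under test registers a callback with its dependency
        CallbackUser callbackUser = new CallbackUser(callbackClass);
        callbackUser.start();

        // Capture the callback that was passed to the mock
        verify(callbackClass, times(1)).doSomething(argCaptor.capture());

        // Invoke the captured callback as the real dependency would,
        // then assert on whatever handle(1) should have changed
        IHandleCallbacks callback = argCaptor.getValue();
        callback.handle(1);
    }
}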

Example

I’ve made a small example repository on GitHub for reference.

Delivering Value

Delivering value in small, consistent amounts is an important part of working in a lean manner; it enables a good feedback loop to ensure you are delivering the most value possible. However, I was recently taught a lesson in questioning my approach and not assuming that one approach will fit every situation.

Continuous delivery

Continuous delivery is where you aim to always keep your software in a deployable state, have the ability to complete automated deployments easily, and release small, valuable changes often rather than making large, infrequent deployments. I’m a huge believer in the benefits of continuous delivery and it’s something I always try to work towards on whatever project I am working on.

Continuous delivery is a topic in itself so I’m not going to go into too much detail; Martin Fowler covers it in depth here.

What is value?

Value can mean different things to different people.

Gaining value from releases can be in the form of increased conversion, learning more about your market, or performance improvements to name just a few.

These are all valuable outcomes of releasing software; however, it is important to keep in mind the actual goal of the software. If you haven’t read The Goal, you should definitely add it to your reading list. It warns of focusing on efficiencies in the wrong parts of your system. For example, spending time making one part of your process super efficient when you have a bottleneck further down the production line is effort wasted. Similarly, a focus on reducing waste may seem like value added; however, if that reduction in waste has a negative impact on your overall output, it may actually be reducing value overall.

My lesson learned

For a while now I’ve been working mainly on web projects, where continuously delivering tiny iterations can work brilliantly. Recently, however, I’ve been working on some native app projects. My immediate reaction was to “ship all the thingz”, but I was reminded that native apps are not the web. For a user, a release doesn’t mean refreshing the browser; it means installing an update. Not many users would want to update an app several times a day.

This is what made me start thinking about how the value is generated. In this instance, value is generated from users using the app. I was too focused on releasing the software quickly, because something is not done until it is live. However, in my push for continuous delivery I was in danger of forgetting where the real value came from. I was trying to optimise the wrong part of the system.

Examples

After the revelation that getting lots of tiny incremental updates out super fast isn’t always the best approach, I thought I would have a look at how often some of the apps I use release updates.

Below are the average, maximum, and minimum number of days between the last 10 deployments for Netflix, Spotify and Twitter.

Days between releases   Netflix   Spotify   Twitter
Average                     3.5       4.4       6.5
Max                          11        11         8
Min                           1         1         3

The data for this was collected from APK Mirror.

These release cycles seem short and appear to contradict my earlier point that delivering small incremental amounts doesn’t work too well outside of web projects. I’m sure there are many teams out there who would be proud to be averaging a release every 4 days.

However, when compared to some of Netflix’s deployment pipelines, 4 days is very long. For example, it is shown here that in some instances it takes Netflix just 16 minutes from code check-in to live deployment.

Don’t get too stuck in your ways

This was a really welcome lesson in not getting too stuck in your ways and thinking “this is the way things should be done”.

Every situation is different and you always need to be looking at how value is generated and how it can be maximised. So in my situation, it’s important to find that balance between deploying quickly and efficiently while still making the end users happy by delivering updates that they will find valuable.

ES6 Mocha Snippets

ES6 Mocha Snippets is a great Visual Studio Code extension that helps speed up writing your unit tests. Given how important unit testing is, anything that speeds it up is good in my opinion.

Installation

Installation is super straightforward through either the extensions bar or the Visual Studio Code Quick Open command.

Browsing and installing extensions is easy in Visual Studio Code. Just bring up the extensions bar, then search for “ES6 Mocha Snippets” and hit install.

Alternatively, if you already know which extension you want to install, use the Quick Open shortcut (⌘+P for Mac, Ctrl+P for Windows and Linux), then enter “ext install {name of extension}”.

So for this extension it would be:

ext install es6-mocha-snippets

Usage

This extension provides a range of snippets for ES6 Mocha tests, such as for “describe”, “it” and “before”.

(Screenshot: how to select a snippet.)

To use one, start typing the desired snippet, use the arrow keys to select the correct one, then press tab or enter to complete the snippet.

(Screenshot: how the snippet is shown.)

A great extension to help speed up writing those all-important tests! Check out the marketplace page for it here.

If you need a refresher, here is my introduction to JavaScript unit testing using Mocha.

Crontab Basic Example

Having recently moved to a Mac from a Windows laptop, I’ve found the loss of many features I took for granted daunting. Losing the Task Scheduler is one such example. However, we have crontab to the rescue.

Crontab

A cron job is just a command to run and the schedule that it should be run on.

Crontab is just a collection of those cron jobs. Compared to Task Scheduler, it does appear we lose a lot of functionality, such as triggers that are not based only around time. I thought the loss of such features and of the GUI would make crontab a hindrance, but so far it really hasn’t been. I don’t think I’ve ever made a scheduled task that wasn’t based on a timed schedule, and the Task Scheduler GUI is usually massively unresponsive, especially when remoting on to a server.

-l

To look at what cron jobs are currently set up, use the command:

crontab -l

This will list out all of the commands and their respective schedules.

-e

To edit the crontab, use:

crontab -e

This will open the crontab file for editing; for me this opens in Vim by default.

To make a change in Vim, press “i” to put it in “Insert” mode, make your changes, then exit by first pressing escape to leave “Insert” mode, followed by “:wq” to write and quit.

cron schedule

A cron schedule is made up of 5 time parts: minute, hour, day of month, month, and day of week.

* * * * *
| | | | |
| | | | +---- day of week (0-6, Sunday = 0)
| | | +------ month (1-12)
| | +-------- day of month (1-31)
| +---------- hour (0-23)
+------------ minute (0-59)

For example, to run a command at 2:30pm every day, the schedule would be:

30 14 * * *

To have a command run every 10 minutes:

*/10 * * * *

To run a command at quarter past and quarter to every hour:

15,45 * * * *
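A full crontab entry is just a schedule followed by the command to run. A minimal sketch, using a hypothetical backup script and appending its output (and any errors) to a log file:

30 14 * * * /Users/me/scripts/backup.sh >> /tmp/backup.log 2>&1

This would run the script at 2:30pm every day.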

crontab.guru is a great site where you can test out your schedule.