Yearly Archives

5 Articles
Implementing the new ambient second hand for Wear OS with Watch Face Decompositions

The Wear OS team seems to have become increasingly negligent about releasing resources for developers. An emulator image with the new UI and the H update was only released a week after some watches had already received the update.

Now the new Qualcomm Snapdragon Wear 3100 processor, with its ability to run constant animations in ambient mode with up to 16 colors, has been out for two months, and there has still been no documentation on how to add this functionality to watch faces.

Finding the source

I originally tried to add this functionality by continuing to draw the second hand in ambient mode and increasing frame rate, but this used up tons of battery and I could tell this was not the way to go.

I decided to look at the WatchFaceService decompiled class and found a function that was not documented called updateDecomposition that receives a WatchFaceDecomposition object.

What is a Watch Face Decomposition?

There is another class called WatchFaceDecomposition that has a builder accepting several component types. Think of it a bit like how you would build a watch face in Facer, if you have used that before: different components such as fonts, text, and images are passed to the builder with their own ids and z-indexes. Components can also be set to animate using one of several base-type presets, such as the HOUR_HAND, MINUTE_HAND, and TICKING_SECOND_HAND constants you will see used in the example below.

The different components that can be added are:

  • Image Component: This can be used as a background or an icon, can be set to rotate with one of the hand presets above, and must have an Icon drawable set
  • Font Component: Font components are set from a source image (an Icon drawable) containing all the digits that will be used, laid out top to bottom; the number of digits in the image must also be supplied
  • Number Component: A font component can be attached to a number component, which updates using the same presets listed above
  • Complication Component: All complications that should be visible can be added as complication components. I haven’t had a chance to try this yet, but it seems pretty straightforward.
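The hand presets boil down to rotation rates, and the mapping from time of day to hand angle is ordinary clock arithmetic. As a frame of reference, it can be sketched in plain Java (the class and method names here are my own, not part of the hidden API):

```java
public class HandAngles {
    static final double MS_PER_HOUR = 60 * 60 * 1000;

    // The hour hand makes one full revolution every 12 hours
    static double hourHandDegrees(long msSinceMidnight) {
        return (msSinceMidnight / (12 * MS_PER_HOUR)) * 360.0 % 360.0;
    }

    // The minute hand makes one full revolution every hour
    static double minuteHandDegrees(long msSinceMidnight) {
        return (msSinceMidnight / MS_PER_HOUR) * 360.0 % 360.0;
    }

    // A ticking second hand advances 6 degrees once per whole second
    static double tickingSecondHandDegrees(long msSinceMidnight) {
        long wholeSeconds = (msSinceMidnight / 1000) % 60;
        return wholeSeconds * 6.0;
    }

    public static void main(String[] args) {
        // At 6:00 the hour hand points straight down (180 degrees)
        System.out.println(hourHandDegrees(6 * 60 * 60 * 1000));
    }
}
```

The point of the presets is that the system applies rates like these to the component images continuously in ambient mode, without your watch face drawing anything.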
An example of the ambient second hand. I created a custom goals watch face based on the watch face that came with the Fossil Sport. I will release this if I ever get approved by the Play Store 🙁

How to add it to your watch face

Here is an example of the function I created to make a WatchFaceDecomposition:

private WatchFaceDecomposition createDecompositionWatchFace() {
    ImageComponent ticksComponent = new ImageComponent.Builder()
            .setImage(Icon.createWithBitmap(BitmapUtil.loadBitmap(getBaseContext(), "ticks_embossed_decomposable.png")))
            .setBounds(new RectF(0, 0, mCenterX * 2, mCenterY * 2))
            .build();

    ImageComponent hourHandComponent = new ImageComponent.Builder(HOUR_HAND)
            .setImage(Icon.createWithBitmap(BitmapUtil.loadBitmap(getBaseContext(), "hour_hand_decomposable.png")))
            .setBounds(new RectF(0.46f, 0.23f, 0.5f, 0.75f))
            .build();

    ImageComponent minuteHandComponent = new ImageComponent.Builder(MINUTE_HAND)
            .setImage(Icon.createWithBitmap(BitmapUtil.loadBitmap(getBaseContext(), "minute_hand_decomposable.png")))
            .setBounds(new RectF(0.455f, 0f, 0.5f, 0.75f))
            .build();

    ImageComponent secondHandComponent = new ImageComponent.Builder(TICKING_SECOND_HAND)
            .setImage(Icon.createWithBitmap(BitmapUtil.loadBitmap(getBaseContext(), "second_hand_decomposable.png")))
            .setBounds(new RectF(0.46f, 0.03f, 0.5f, 0.75f))
            .build();

    // The final builder calls are reconstructed from the decompiled class;
    // check the method names against your own decompiled source
    return new WatchFaceDecomposition.Builder()
            .addImageComponents(ticksComponent, hourHandComponent, minuteHandComponent, secondHandComponent)
            .build();
}

Now actually telling the watch face to display this is another matter, but pretty simple once you know how.

First, at the end of the onCreate method of the watch face engine, call the updateDecomposition function we mentioned earlier with the decomposition we just built:

    updateDecomposition(createDecompositionWatchFace());
Next, we need to make a receiver class within the engine to update the decomposition occasionally. Running this function seems to black out the screen for a second, so it should only be done when one of the components actually needs to be updated; the new processor appears to handle things like the animation in between.

private class UpdateDecompositionReceiver extends BroadcastReceiver {
    @Override
    public void onReceive(Context context, Intent intent) {
        // Rebuild and push the decomposition when the alarm fires
        updateDecomposition(createDecompositionWatchFace());
    }
}

Now, in the watch face class outside the engine, we need to create an alarm that will wake up the watch face and rebuild the decomposition. We will start it when the watch face is created and cancel it when it is destroyed.

private AlarmManager alarmManager;
private PendingIntent decompositionPendingIntent;

private void startDecompositionAlarm() {
    this.alarmManager = (AlarmManager) getSystemService(Context.ALARM_SERVICE);
    decompositionPendingIntent = PendingIntent.getBroadcast(getBaseContext(), 0,
            new Intent(getBaseContext(), Hubcaps.Engine.UpdateDecompositionReceiver.class), 0);
    // Repeat roughly every hour; the exact call was truncated in the original,
    // so setInexactRepeating is reconstructed here
    alarmManager.setInexactRepeating(AlarmManager.ELAPSED_REALTIME,
            SystemClock.elapsedRealtime() + 3600000L,
            3600000L, decompositionPendingIntent);
}

@Override
public void onCreate() {
    super.onCreate();
    startDecompositionAlarm();
}

@Override
public void onDestroy() {
    alarmManager.cancel(decompositionPendingIntent);
    super.onDestroy();
}

All done! This is a simple analog watch face I created for my Hubcaps watch face that you can see here if you have either the Montblanc Summit 2 or Fossil Sport.

Hubcaps Preview

I have already bugged lots of people for some official documentation, but in the meantime, I hope this helps. Look forward to seeing a lot of developers using this in their watch faces!

Heads up for new Fall 2018 Wear OS

In mid-August, Google announced that they would be releasing an updated user interface. It has already begun to roll out, but there is still a lot of information that has not been shared with developers and is not available in a preview. Luckily, the Reddit user ntauthy was able to find a way to manually enable these features on some versions of Wear OS. I’m going to go over the changes that were announced for developers as well as the ones that were not.

The following were announced by Hoi Lam on the Google+ Wear OS Developers community.

More Concise Text

Notifications now show a smaller amount of text initially, but users can still expand the notification by tapping on it.

Set custom colors for notifications

Brand Colors

Notifications can show any color you set using the .setColor function in your notification’s builder.

No More Custom Layouts

Tapping on a notification will no longer bring up a custom layout for expanded content, but rather show the full text of a notification with additional actions.

The following changes were seen by me on the new version by manually enabling feature flags. Allow me to make the disclaimer that these are not confirmed for the official release, but they are within the Wear OS app.

Now Playing status in the quick settings panel

Quick Settings Controls

The current media session from your watch or paired device now displays in the top status bar, showing the current title alongside a Play/Pause transport control. Tapping on the title triggers the content intent of the MediaStyle notification if it was posted from the watch, or brings up the Media Controls activity if not.

The Media Controls Activity

Media Controls in the app drawer and open

One thing that surprised me was a new activity available in the app drawer that brings up extended controls for the current media session. It displays transport controls with the volume slider and the current title, while the album art fills the screen behind it with the time at the top. After enabling it, I later got a notification telling me that this activity would automatically be launched when I started playing something, but that this could be turned off by long-pressing on the screen while it was open. It seems intended to replace the full-screen controls previously available, while letting users disable it if they don’t like it taking up their whole screen.

The Content intent on new notifications

While in the old version tapping on a notification triggered the content intent if one was set, this is not the case for the new one. Instead, the content intent shows up as another action labeled “Open”. 

No more progress bar

New notification with no progress bar
Old notification with progress bar

With the old version, a circular progress bar could be shown around the notification’s icon. This does not appear to be possible in the new version, so developers should not depend on it for showing progress.

Let me know if I am missing anything or if you have any questions.

Getting User Input on Wear OS – Part 2 – Keyboard

In the previous post, I explained how I set up a “Get Input” fragment for my apps with a voice or keyboard option and how to get the voice input. In this post I will explain how to get input from the keyboard.

I’ve seen some apps use the RemoteInput API, but this feels jarring to me since it seems to take you out of the app. Of course, they or I could be doing it wrong. The only other way I have found in the documentation is using an EditText. However, you may not want the user to have to select the text field first, so my method is more similar to the Play Store, where selecting the keyboard button takes you directly to the keyboard.

We will first create a fragment with just an EditText view. Wear OS will open the keyboard when the text box is selected. To achieve this, we are going to open the fragment and programmatically select the EditText view.

You will want to change the IME type of the view to whatever best suits your situation. In this example, we are using it as a search. Be sure to change it in the code and the layout, otherwise the types will not match and you will not get an input back.

Here is the layout for the KeyboardInputFragment:

<?xml version="1.0" encoding="utf-8"?>
<FrameLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:visibility="gone">

    <!-- The id is illustrative; match it in the fragment code -->
    <EditText
        android:id="@+id/keyboard_input_edit_text"
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        android:imeOptions="actionSearch"
        android:inputType="text"
        android:visibility="visible" />
</FrameLayout>

Notice the whole layout is set to Gone. The keyboard will take up the whole screen, so we do not need to show anything. Here is the code for the KeyboardInputFragment:

public class KeyboardInputFragment extends Fragment {

    private GetInputFragment getInputFragment;

    @Override
    public View onCreateView(LayoutInflater inflater, @Nullable ViewGroup container, Bundle savedInstanceState) {
        View view = inflater.inflate(R.layout.fragment_keyboard_input, container, false);
        final EditText editText = view.findViewById(R.id.keyboard_input_edit_text); // id is illustrative

        editText.setOnEditorActionListener(new TextView.OnEditorActionListener() {
            @Override
            public boolean onEditorAction(TextView textView, int actionId, KeyEvent keyEvent) {
                boolean handled = false;
                if (actionId == EditorInfo.IME_ACTION_SEARCH) {
                    // setInputText is a hypothetical callback on the previous fragment;
                    // hand the text back, then close this fragment
                    getInputFragment.setInputText(textView.getText().toString());
                    getFragmentManager().popBackStack();
                    handled = true;
                }
                return handled;
            }
        });

        showSoftKeyboard(editText);
        return view;
    }

    public void setGetInputFragment(GetInputFragment getInputFragment) {
        this.getInputFragment = getInputFragment;
    }

    private void showSoftKeyboard(View view) {
        if (view.requestFocus()) {
            InputMethodManager imm = (InputMethodManager) getContext().getSystemService(Context.INPUT_METHOD_SERVICE);
            if (imm != null) {
                imm.showSoftInput(view, InputMethodManager.SHOW_IMPLICIT);
                imm.toggleSoftInput(0, 0);
            } else {
                Cat.e("Couldn't open keyboard");
            }
        }
    }
}
The showSoftKeyboard function focuses the EditText view, which opens the keyboard. setGetInputFragment passes in an instance of the previous fragment so that we can send the text back once we have something from the user. After the text has been sent back, we can close this fragment, and the fragment from the previous post will have the text it needs to complete the action.

Hope this helps! You could probably do this in a separate activity and send the result back in an intent. I also thought extending a popup window might work. Or am I using the RemoteInput API wrong? I’d be interested in hearing some other solutions!

Getting user input on Wear OS – Part 1 – Voice

Google Play’s input screen

When I first began programming for Wear OS, I was expecting an easy out-of-the-box method for getting user input similar to how Google Play works: giving the user the option to speak or type, plus a list of canned inputs to choose from.

Unfortunately, this isn’t as easy as I thought and the developer page seemed pretty vague and unhelpful, but with a little work I was able to come up with a pretty small script to accomplish what I needed to do. I will explain the basics of this in the following post.

To begin, create a layout that looks something like this:

Component tree
I use a Dismiss Layout as the wrapper so the user can swipe the input away if they change their mind, with a Box Inset Layout inside it to make sure it is readable on any screen. A scroll view isn’t necessary if you aren’t going to include any canned responses. I also include a microphone button and a keyboard button.
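Since the component-tree screenshot doesn't survive here, the described tree can be sketched along these lines (the widget classes assume the AndroidX Wear library, and the ids are illustrative):

```xml
<androidx.wear.widget.SwipeDismissFrameLayout
    xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="match_parent">

    <androidx.wear.widget.BoxInsetLayout
        android:layout_width="match_parent"
        android:layout_height="match_parent">

        <ScrollView
            android:layout_width="match_parent"
            android:layout_height="match_parent">

            <LinearLayout
                android:layout_width="match_parent"
                android:layout_height="wrap_content"
                android:orientation="vertical">

                <ImageButton
                    android:id="@+id/voice_input_button"
                    android:layout_width="wrap_content"
                    android:layout_height="wrap_content" />

                <ImageButton
                    android:id="@+id/keyboard_input_button"
                    android:layout_width="wrap_content"
                    android:layout_height="wrap_content" />

                <!-- Canned responses can be added here as TextViews -->
            </LinearLayout>
        </ScrollView>
    </androidx.wear.widget.BoxInsetLayout>
</androidx.wear.widget.SwipeDismissFrameLayout>
```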

I use a Fragment class to include all the code for the input. Here’s the basic code for getting the input from the mic:

// Create an intent that can start the Speech Recognizer activity
private void displaySpeechRecognizer() {
    Intent intent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
    intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL,
            RecognizerIntent.LANGUAGE_MODEL_FREE_FORM);
    // Start the activity; the intent will be populated with the speech text
    startActivityForResult(intent, SPEECH_REQUEST_CODE);
}

This is the function to run when the user taps the voice input button. It starts the speech recognizer intent. A private integer constant can be used for SPEECH_REQUEST_CODE so we can get the result back later by overriding onActivityResult.

@Override
public void onActivityResult(int requestCode, int resultCode, Intent data) {
    Cat.d("Got Request Code " + SPEECH_REQUEST_CODE + " " + requestCode);
    Cat.d("Got Result Code " + resultCode + " " + RESULT_OK);
    if (requestCode == SPEECH_REQUEST_CODE) {
        try {
            List<String> results = data.getStringArrayListExtra(RecognizerIntent.EXTRA_RESULTS);
            String spokenText = results.get(0);
            // ...use spokenText...
        } catch (NullPointerException ex) {
            Cat.d("No result received");
            // If they are using Android Wear 1.x, back out now (no keyboard to fall back on)
            if (Build.VERSION.SDK_INT == Build.VERSION_CODES.M) {
                getFragmentManager().popBackStack(); // removes this fragment; illustrative
            }
        }
    }
    super.onActivityResult(requestCode, resultCode, data);
}

This function runs when a result comes back. We need to check that the request code matches the one we sent, then grab the first result. There is also the possibility that no result comes back. If that happens, we either do nothing, or remove the fragment entirely on Android Wear 1.x since there is no keyboard input to fall back on.

In the next part, I will explain an easy way to get keyboard input without leaving the app.

Getting colors like Android’s media style notifications

Android’s media style notification

Update: There is a new library by Mateusz Kaflowski, who was kind enough to post it in the comments. The tutorial below shows how to develop your own color-fetching script and tweak it to your liking, but if you want to make sure the colors match the notification exactly, I recommend you use his. More info here.

I was recently writing an Audiobook app and wanted to get dynamic colors based on the book cover similar to Android’s media style notifications. Obviously the best method is to use the Palette API, but implementing it for the most aesthetically pleasing colors with good contrast wasn’t immediately clear.

Phonograph Screenshot

Phonograph’s colors with white buttons and text

Some popular apps like Pocket Casts or Phonograph seem to get the most vibrant color they can find in the art and then darken it so that white buttons and text will have enough contrast. I used this method in previous apps since it was easy, but it bothered me because the chosen color might not suit the artwork very well and often clashed with the notification.

To better match Android’s notifications we need to also fetch a complementary color from the art that has enough contrast for good readability.
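For reference, "enough contrast" here means the WCAG contrast ratio over relative luminance, which is what ColorUtils.calculateContrast is based on. A plain-Java sketch of that computation (the class and method names are mine) looks like this:

```java
public class ContrastRatio {
    // Linearize one sRGB channel (0-255) per the WCAG definition
    static double linearize(int channel) {
        double s = channel / 255.0;
        return s <= 0.03928 ? s / 12.92 : Math.pow((s + 0.055) / 1.055, 2.4);
    }

    // Relative luminance of an opaque sRGB color
    static double luminance(int r, int g, int b) {
        return 0.2126 * linearize(r) + 0.7152 * linearize(g) + 0.0722 * linearize(b);
    }

    // WCAG contrast ratio between two colors, from 1 (identical) to 21 (black on white)
    static double contrast(int[] a, int[] b) {
        double la = luminance(a[0], a[1], a[2]);
        double lb = luminance(b[0], b[1], b[2]);
        double lighter = Math.max(la, lb);
        double darker = Math.min(la, lb);
        return (lighter + 0.05) / (darker + 0.05);
    }

    public static void main(String[] args) {
        // Black on white is the maximum possible contrast
        System.out.println(contrast(new int[]{255, 255, 255}, new int[]{0, 0, 0}));
    }
}
```

A ratio of 4.5:1 is the usual accessibility bar for body text, which is why a threshold in that neighborhood makes a sensible cutoff when filtering swatches below.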

To start, let’s generate a palette:

public static Palette generatePalette(Bitmap bitmap) {
    if (bitmap == null) return null;
    Palette palette = new Palette.Builder(bitmap).generate();
    // If almost nothing survived the default filters, clear them and try again
    if (palette.getSwatches().size() <= 1) {
        palette = new Palette.Builder(bitmap)
                .clearFilters()
                .generate();
    }
    return palette;
}

There are a couple of things to note here. By default, the palette builder has filters that reject colors close to black or white, as well as certain hues that can cause accessibility issues for color-blind folks like me. The second builder call clears all filters in case those are the only colors that can be found. You can even add your own filters using addFilter.

Next, we need an easy way to figure out the distance between two LAB colors. We can do this with ColorUtils.

private static double calculateDistance(int color0, int color1) {
    double[] lab0 = new double[3];
    double[] lab1 = new double[3];

    ColorUtils.colorToLAB(color0, lab0);
    ColorUtils.colorToLAB(color1, lab1);

    return ColorUtils.distanceEuclidean(lab0, lab1);
}

Now we need a way to compare different swatches to find the swatch with the greatest distance.

private static class distanceComparator implements Comparator<Palette.Swatch> {
    private final int color;

    private distanceComparator(int color) {
        this.color = color;
    }

    @Override
    public int compare(Palette.Swatch swatch1, Palette.Swatch swatch2) {
        // Sort by distance from the reference color, greatest first
        return Double.compare(calculateDistance(swatch2.getRgb(), color),
                calculateDistance(swatch1.getRgb(), color));
    }
}

Okay. Now everything is set up to get the colors.

public static int[] getMatchingColors(Palette palette, int[] fallback, boolean invert) {
    if (palette != null && palette.getDominantSwatch() != null) {
        Palette.Swatch swatch = palette.getDominantSwatch();
        int swatchColor = swatch.getRgb();
        int matchingColor = -1;
        List<Palette.Swatch> swatches = new ArrayList<>(palette.getSwatches());
        Collections.sort(swatches, new distanceComparator(swatchColor));
        for (Palette.Swatch contrastingSwatch : swatches) {
            Cat.d("Testing contrast of " + ColorUtils.calculateContrast(contrastingSwatch.getRgb(), swatchColor));
            // MIN_ALPHA_CONTRAST is a readability threshold, e.g. 4.5
            if (ColorUtils.calculateContrast(contrastingSwatch.getRgb(), swatchColor) > MIN_ALPHA_CONTRAST) {
                matchingColor = contrastingSwatch.getRgb();
                break;
            }
        }
        if (matchingColor == -1) {
            matchingColor = swatch.getBodyTextColor();
        }
        if (invert) {
            return new int[]{matchingColor, swatchColor};
        }
        return new int[]{swatchColor, matchingColor};
    }
    return fallback;
}

Some explanation here. getDominantSwatch() returns the swatch that appears most often in the bitmap, which is close to what notifications use for the background.

Next we can get a list of all the swatches that were generated in the palette and use the comparator we made earlier to sort them by the greatest distance. We can then loop through them making sure the swatch also has a high enough contrast for readability. More information can be found in the Material guidelines for this.

If no matching color can be found, we can use getBodyTextColor() to get a guaranteed contrasting color. (this will either be white or black with some alpha)

I also added an invert option to my function in case a user wants to invert the colors for their audiobook to make it lighter or darker.

NavBooks Screenshot

The completed result in NavBooks.

The result gives you colors that are generally very close to the notification’s, but they may still not match perfectly. A couple of things may factor into the difference, such as the bitmap being resized. I suspect the notification may also be taking the dominant color from the left side of the image for the gradient it uses. You can try to get these to match more closely if you want, but I think the colors generated with this method are about as good.

Do you have any other methods for generating colors that you use?