Reboot to Update

I have to say that — until now — I have been1 among the lucky ones who haven't been plagued by any weird iOS or iPadOS 13 bugs. But now, my iPad suddenly requires a restart before it can update any app from Apple's App Store.

That's right, I have to do the “press sleep and home2 at the same time until the bright Apple logo shows up” dance.

Every. Single. Time.

In hindsight, I'm not really sure when the issue started to show up. It may just as well have been caused by the update to iPadOS 13.2.2. My iPhone3 also requires a reboot in most cases, but not always, before it successfully manages to install updated apps.

What happens is that, after pressing the update button, the App Store app shows a spinner as an indication that the update is about to start. Then, the familiar circle (drawn with a thin line) with a square in the middle shows up. And then, nothing. The App Store app can keep this status all day, if I let it.

I tried updating apps individually and in concert. I tried disconnecting the data connection and reconnecting before repeating my attempt to update. I tried touching the central square to abort the update procedure.

Nothing works; a restart is required in 100% of such cases. I haven't read about this behavior or heard anyone talk about it anywhere. But it's there, and I guess it won't leave me alone on its own.

Therefore, I sincerely hope that the next update down the road fixes the issue, because this really sucks.

[Update]: After a couple of days running iPadOS/iOS 13.2.3, it seems that the issue has indeed been fixed with a software update. I can update my apps just fine again.

  1. Even during the beta phase of iPadOS.
  2. My iPad is a 10.5” Pro model.
  3. Which is also on 13.2.2.


My introduction to the world of slide keyboards happened sometime after custom keyboards were allowed in iOS1. I tried some of the available keyboards and have been using SwiftKey ever since.

My experience with SwiftKey has been mostly positive.2 The rate of correctly interpreted gestures is not at 100%, not even close. But it is close enough to keep me using the slide keyboard. My preferred style of work has been to slide entire sentences and then edit the mistakes away. Overall, this approach has still saved me some time3.

Given that history, I admit to having mixed feelings about Apple's announcement of a slide keyboard named QuickPath in iOS/iPadOS 13. After all, SwiftKey and similar products have been on the market for many years. These guys have got to have some advantage over a newcomer to the field, right?

During the beta, and after iOS/iPadOS 13 was released, I took my time taking the QuickPath keyboard for a spin. Given my confidence in SwiftKey, I wanted Apple's implementation to have a fair chance. I admit that I was expecting it to fall behind the experienced competition. But I wanted to get a clear picture of the failure rate before casting my verdict and returning to SwiftKey.

The outcome of all the time I spent sliding text into my devices using the new keyboard is that I am more than impressed by QuickPath's capabilities. In comparison with the results I get from using SwiftKey, the rate of correctly interpreted gestures is significantly4 closer to 100%.

Given the quality of results I personally get out of using the keyboard, it is probably safe to assume that QuickPath has secretly been years in the making. There is just no way that this is the result of a first iteration of building a slide keyboard at Apple. In what otherwise would be called “typical Apple fashion”, at least this feature of iOS 13 shipped when it was ready5.

Most of the time, I still type longer texts6 using a hardware keyboard. But in the past, I always made sure to take my hardware keyboard for the iPad with me when traveling. This is no longer the case. Despite all the bad press that iOS 13 (to some extent) rightfully gets, this single feature is a roaring success in my book.

That said, does the release of QuickPath represent a Sherlock-level event for SwiftKey and the like? Maybe. I personally struggle to find a relevant feature7 of SwiftKey that puts it ahead of QuickPath, but maybe that's just me. Nevertheless, competing with the platform vendor on their own platform is admittedly super-hard, but not impossible.

  1. If I remember correctly, this would have been around iOS 8.
  2. I never used the “cloud” features of SwiftKey.
  3. I have to admit that thumb-typing never clicked (no pun intended) for me. I understand that the yield can be phenomenal for those who really master it.
  4. Like … night and day. Using SwiftKey, I was always struggling to correctly slide two-letter words where the two letters are located next to each other on the keyboard. Example: the German word “zu”. Apple's keyboard masters this particular challenge like a champ.
  5. Yes, I understand the irony of this statement with respect to many other features in iOS/iPadOS 13.
  6. Like this one.
  7. Yes, I know that SwiftKey has the ability to learn your writing style from your social network activities. But QuickPath can learn your writing style from everything you write on the device.
    And, yes, SwiftKey provides different keyboard skins. Personally, I am not the type of person who needs this kind of distraction, and I always used the most neutral skin available.

The little joys of iOS 13

There is one feature I can't remember being mentioned on stage during the iOS 13 introduction at WWDC: you can control the language of apps individually in iOS 13, provided that the app itself supports more than one language.

While I strongly prefer to use English as the primary language at the operating system level1, there are some apps that I always wished I could run in my native German.

Unfortunately, this was not possible in iOS 12 and before. Unless an app itself offered the ability to switch to a different language (and German was a choice), you were out of luck.

But again, in some cases it makes much more sense to run an app in German. A prominent example is the DB Navigator2 app by Deutsche Bahn, or any other public transport app of any German-speaking city that I regularly travel to. It just feels more … natural.

I’m very happy about this unheralded little nugget. It will certainly improve my iOS experience a lot.

  1. When I started using computers, and for a long time after, the choice was to either have English skills or stop using computers.
  2. Don’t worry, the English localization is much better than the proverbial English skills of the DB staff that people sometimes make fun of (albeit in a good-natured fashion).

The App that never sleeps

The other day, I took my phone out of my pocket and noticed that the Halide camera app was on. That's right, I did not have to unlock the phone.

After using Halide and carelessly1 putting the phone in my pocket, it stayed there for quite some time while Halide was obviously running in the foreground the entire time, and so I ended up with a drained battery way too early in the day.

I tried to find information about why the phone wasn't locked automatically while Halide was in the foreground, but I could not find anything specific. At least I was able to successfully reproduce the behavior: Halide just wouldn't trigger the auto-lock.

I tried other camera apps: Apple’s own keeps the phone unlocked for about five minutes and Obscura 2 implements an even shorter timeout for locking the phone. Neither app seems to have any preference2 setting that affects the activation of auto-lock after the globally configured time period.

Mysterious and potentially undocumented behavior aside, I don’t think I want a camera app on my phone that keeps the phone awake indefinitely. I can see the point of keeping the phone awake in the hunt for a perfect shot. But in my opinion, the risk of ending up with a completely drained battery entirely cancels out the utility of Halide’s vigilance.
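For context, the iOS mechanism that governs this is UIKit's idle timer: any foreground app can opt out of auto-lock with a single flag. The following is a minimal sketch of how a camera app could keep the screen awake; the view controller and its lifecycle hooks are made up for illustration, and I have no insight into what Halide actually does:

```swift
import UIKit

// Hypothetical capture screen; any foreground view controller could do this.
final class CaptureViewController: UIViewController {

    override func viewDidAppear(_ animated: Bool) {
        super.viewDidAppear(animated)
        // While this flag is true, iOS suspends the auto-lock timer,
        // regardless of the globally configured auto-lock period.
        UIApplication.shared.isIdleTimerDisabled = true
    }

    override func viewWillDisappear(_ animated: Bool) {
        super.viewWillDisappear(animated)
        // Re-enable the idle timer so the device can lock again.
        UIApplication.shared.isIdleTimerDisabled = false
    }
}
```

The flag only has an effect while the app is frontmost, which would at least be consistent with the drained-battery scenario above, where Halide apparently stayed in the foreground the whole time.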

Maybe the fo … tography is not strong enough in me.

  1. Under the assumption that the phone would lock itself after the configured auto-lock period expired.
  2. Sure enough, the official iPhone User Guide available off of iOS’s Books app did not mention anything specific in the chapter about

Wishlist for Unread 2

While I am a big fan of Reeder, I still use Unread more or less heavily for reading my RSS feeds1. The developer of Unread recently pre-announced Unread 2 as the next evolutionary step of the popular feed reader.

Apparently, it is too early to publish any information about new features, but the announcement still got me thinking about what features I would personally want from the new release. Here’s a list:

  • Ability to filter for all and starred articles (in addition to the currently implemented new articles). Minimal UI, I know. But still, I’d find this very helpful.
  • Keyboard shortcuts. The support for keyboard shortcuts would add nothing to the UI and still be helpful in many cases.
  • Change the font size in smaller increments, at least on the iPad. Currently, the difference between two increments is the difference between too big and too small. It’s hard to hit the perfect size, especially on the iPad.
  • Readability view on a per-feed basis. This would keep the friction of using the app low because one tap is saved to switch to readability mode.
  • Administration of subscriptions. Add, rename, and remove feeds. Move to folders.

Of all of those wishes, I want the smaller font size increments the most. I read most of my feeds on the iPad, and this is where a new version of Unread would benefit me the most.

  1. Sometimes I prefer the versatility of Reeder and sometimes I’m more about a minimal UI to concentrate on the process of reading itself.

13 Minutes to the Moon

If you're in the market for a podcast recommendation, here it is: go listen to 13 Minutes to the Moon, produced by the BBC World Service. It's an in-depth walk-through of topics around the nearly 13-minute-long final descent1 of the Eagle lander from the Columbia command module down to the surface of the Moon.

I have read about, listened to, and watched tons of material about this expedition. But one thing I learned from listening to episode 9 of the series was that the landings were in all cases expressly planned to happen in a region close to the terminator while the Moon was in a waxing phase2.

Thanks to the low position of the sun over the horizon (behind the LEM), the overall amount of light was reduced, and the structures on the surface cast long shadows. These created contrast markers in the blinding whiteness to assist the LEM pilots in recognizing and avoiding potential obstacles that might be a hazard to the landing procedure.
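The effect is easy to quantify with basic trigonometry (the numbers below are just an illustration, not mission data): with the sun at elevation angle θ above the horizon, an obstacle of height h casts a shadow of length

```latex
% Shadow length as a function of sun elevation
s = \frac{h}{\tan\theta}
% Example: at a sun elevation of roughly 10^\circ, a boulder of height
% h = 2\,\mathrm{m} casts a shadow of
% s \approx \frac{2}{\tan 10^\circ} \approx 11\,\mathrm{m},
% so even small obstacles become conspicuous during the descent.
```

The lower the sun, the longer the shadows, which is exactly why a landing site near the terminator maximizes the contrast available to the pilots.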

In hindsight, it seems totally natural and obvious to plan the landings this way, but it never actually occurred to me until I listened to 13 Minutes to the Moon.

  1. That inspired the title of the podcast series.
  2. This conclusion is also backed up by the flight path of the mission, see e.g. this illustration.

Carrot Weather Data Source Reshuffle

Today’s release of weather app Carrot Weather comes with some interesting changes. For the first time, the app supports MeteoGroup as a data source.

I have been using MeteoGroup's own app WeatherPro for years. In my personal experience, its predictions for Europe are more accurate than those of the other “global” data sources that are also supported by Carrot.

However, I never liked the way WeatherPro presents its data very much and kept looking for replacements. Hello Weather is certainly a viable alternative, but for some reason it did not stick for long. At some point, I switched to Carrot, and my data source of choice within Carrot (The Weather Channel) delivered okay-ish results I could live with. Win-win, sort of.

Without any evidence for it to happen, I nevertheless kept up hope that the future would bring access to higher-quality data, specifically the data that powers WeatherPro.

According to the release notes of today’s Carrot release, The Weather Channel terminated the contract because it no longer wants to provide data to competing apps, in order to get1 more users to switch to its own apps. Here’s hoping that MeteoGroup does not come to a similar conclusion any time soon.

  1. For reasons I could only speculate about.

First Men on the Moon

This is not a new thing, but it fits perfectly with the Apollo 11 buzz (no pun intended) that we are going to experience in the coming month.

Make sure to visit the site and replay the final descent of the landing module, along with synchronized communication in the mission control room in Houston and between CAPCOM, Columbia, and Eagle.

SwiftUI Example

I came across this video on Twitter. It demonstrates a non-trivial, but still not overly complicated, example of a SwiftUI declaration.

I have to say that, after looking at the SwiftUI declarations, I’m actually undecided whether the concerns I voiced in this article are warranted or not. Yes, the chained expressions are sort of a mess, and it remains to be seen whether such code is maintainable.

On the other hand, the presentation of the relevant information is not as obscured as I feared it would be. In other words, by looking at the code it is in my opinion possible to understand what’s happening.


I very much like the idea of a declarative definition of a user interface and thus I’m motivated to kick SwiftUI’s tires. Having worked halfway through the tutorials, I’m a little bit concerned about the scalability of SwiftUI. Sure, a DSL is always going to win the elevator pitch because it looks so nice and elegant.

At least the WWDC videos about SwiftUI that I have watched so far restrict themselves to more or less the bare minimum of complexity that you might want to add to the declarative definition of an app’s user interface. And already in the simple cases SwiftUI starts to get messy, e.g. with respect to formatting chained expressions1.
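To make the chained-expression point concrete, here is a small made-up example in the style of the tutorials (the view and its data are invented for illustration). Every modifier call returns a new wrapped view, so even a trivial row turns into a chain whose line breaks and indentation are entirely up to the author:

```swift
import SwiftUI

// Made-up row view, purely for illustration.
struct LandmarkRow: View {
    var name: String
    var isFavorite: Bool

    var body: some View {
        HStack {
            Text(name)
                .font(.headline)   // each modifier wraps the view again
                .lineLimit(1)
            Spacer()
            if isFavorite {
                Image(systemName: "star.fill")
                    .foregroundColor(.yellow)
            }
        }
        .padding(.vertical, 8)     // modifiers on the container chain too
    }
}
```

Even in this tiny case, the declaration is a tree of nested builder closures plus trailing modifier chains — readable here, but it is easy to see how the formatting questions multiply as the hierarchy grows.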

My (probably not very popular) point is that I personally believe that defining a scalable XML-based format is far more likely to yield a good result than designing a DSL for the same purpose, because scalability is already baked into XML itself. You can wrap tags around tags around tags real simple, and the resulting complexity is still kept under control.

In many cases, the problem is that the design of a DSL will start with the simple and elegant cases and stay with the simple and elegant cases for some time – until it needs to expand towards supporting higher-level complexity2. But if the need for supporting higher-level complexity hasn’t been considered from the start, then the DSL will fall apart pretty quickly.

To drive my point home: I have actually done some work in declarative UI definition in the Windows world, specifically with the Windows Presentation Foundation (WPF)3. Microsoft uses a dialect of XML named XAML for the UI declaration.

I fully understand that XAML in particular has lots of problems and isn’t as much fun as you might want it to be. But still, given the choice between declaring a UI in an XML dialect or by means of a DSL (like SwiftUI), I would personally very likely prefer the XML.

  1. That does not even include the point where suddenly imperative paradigms are mixed into the declarative language.
  2. Which – let’s face it – it will inevitably have to.
  3. Yes, in a text editor. There is a graphical frontend for XAML. My experiences with the graphical frontend are staggering and I have yet to come across anyone seriously endorsing it.