The other day, I took my phone out of my pocket and noticed that the Halide camera app was on. That’s right, I did not have to unlock the phone.
After using Halide, I had carelessly put the phone in my pocket, where it stayed for quite some time – with Halide obviously running in the foreground the entire time – and so I ended up with a drained battery way too early in the day.
I tried to find information about why the phone wasn’t locked automatically while Halide was in the foreground, but I could not find anything specific. At least I was able to reliably reproduce the behavior: Halide just wouldn’t trigger the auto-lock.
I tried other camera apps: Apple’s own Camera.app keeps the phone unlocked for about five minutes and Obscura 2 implements an even shorter timeout for locking the phone. Neither app seems to have any preference setting that affects the activation of auto-lock after the globally configured time period.
Mysterious and potentially undocumented behavior aside, I don’t think I want a camera app on my phone that keeps the phone awake indefinitely. I can see the point of keeping the phone awake in the hunt for a perfect shot. But in my opinion, the risk of ending up with a completely drained battery entirely cancels out the utility of Halide’s vigilance.
Maybe the fo … tography is not strong enough in me.
- Under the assumption that the phone would lock itself after the configured auto-lock period had expired. ↩
- Sure enough, the official iPhone User Guide available in iOS’s Books app does not mention anything specific in the chapter about Camera.app. ↩
While I am a big fan of Reeder, I still use Unread more or less heavily for reading my RSS feeds. The developer of Unread recently pre-announced Unread 2 as the next evolutionary step of the popular feed reader.
Apparently, it is too early to publish any information about new features, but the announcement still got me thinking about what features I would personally want from the new release. Here’s a list:
- Ability to filter for all and starred articles (in addition to the currently implemented new articles). Minimal UI, I know. But still, I’d find this very helpful.
- Keyboard shortcuts. Support for keyboard shortcuts would add nothing to the UI and would still be helpful in many cases.
- Change font size in smaller increments, at least on the iPad. Currently, the difference between two increments is the difference between too big and too small. It’s hard to hit the perfect size, especially on the iPad.
- Readability view on a per-feed basis. This would keep the friction of using the app low because one tap is saved to switch to readability mode.
- Administration of subscriptions. Add, rename, and remove feeds. Move to folders.
Of all of those wishes, I want the smaller font size increments the most. I read most of my feeds on the iPad, and this feature would give me the most benefit out of a new version of Unread.
- Sometimes I prefer the versatility of Reeder and sometimes I’m more about a minimal UI to concentrate on the process of reading itself. ↩
If you’re in the market for a podcast recommendation, here it is: go listen to 13 Minutes to the Moon, produced by the BBC World Service. It’s an in-depth walk-through of topics around the nearly 13-minute-long final descent of the Eagle lander from the Columbia command module down to the surface of the Moon.
I have read about, listened to, and watched tons of material about this expedition. But one thing I learned from listening to episode 9 of the series was that the landings were in all cases expressly planned to take place in a region close to the terminator while the Moon was in a waxing phase.
Thanks to the low position of the sun over the horizon (at the back of the LEM), the overall amount of light was reduced and the structures on the surface cast long shadows. These created contrasting markers in the blinding whiteness that helped the LEM pilots recognize and avoid potential obstacles that might be a hazard to the landing.
In hindsight, it seems totally natural and obvious to plan the landings this way, but it never actually occurred to me until I listened to 13 Minutes to the Moon.
- That inspired the title of the podcast series. ↩
- This conclusion is also backed up by the flight path of the mission, see e.g. this illustration. ↩
Today’s release of weather app Carrot Weather comes with some interesting changes. For the first time, the app supports MeteoGroup as a data source.
I have been using MeteoGroup’s own app WeatherPro for years. In my personal experience, its prediction data for Europe is more accurate than that of the other “global” data sources that Carrot also supports.
However, I don’t like the way WeatherPro presents its data very much, so I kept looking for replacements. Hello Weather is certainly a viable alternative, but for some reason it did not stick for long. At some point, I switched to Carrot, and my previous data source of choice within Carrot (The Weather Channel) delivered okay-ish results I could live with. Win-win, sort of.
Without any evidence that it would happen, I nevertheless kept up hope that the future would bring access to higher-quality data, specifically the data that powers WeatherPro.
According to the release notes of today’s version, The Weather Channel terminated the contract because it no longer wants to provide data to competing apps, in order to get more users to switch to its own apps. Here’s hoping that MeteoGroup does not come to a similar conclusion any time soon.
- For reasons I could only speculate about. ↩
This is not a new thing, but it fits perfectly with the Apollo 11 buzz (no pun intended) that we are going to go through in the coming month.
Make sure to go to firstmenonthemoon.com and replay the final descent of the landing module along with synchronised communication in the mission control room in Houston and between CAPCOM, Columbia, and Eagle.
I came across this video on Twitter. The video demonstrates a non-trivial, but still not overly complicated example of a SwiftUI declaration.
I have to say that, after looking at the SwiftUI declarations, I’m actually undecided whether the concerns I voiced in this article are warranted or not. Yes, the chained expressions are sort of a mess, and it remains to be seen whether such code is maintainable.
On the other hand, the presentation of the relevant information is not as obscured as I feared it would be. In other words, by looking at the code it is in my opinion possible to understand what’s happening.
I very much like the idea of a declarative definition of a user interface and thus I’m motivated to kick SwiftUI’s tires. Having worked halfway through the tutorials, I’m a little bit concerned about the scalability of SwiftUI. Sure, a DSL is always going to win the elevator pitch because it looks so nice and elegant.
At least the WWDC videos about SwiftUI that I have watched so far restrict themselves to more or less the bare minimum of complexity that you might want to add to the declarative definition of an app’s user interface. And already in the simple cases SwiftUI starts to get messy, e.g. with respect to formatting chained expressions.
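To make the formatting concern concrete, here is a minimal SwiftUI sketch of the kind of chained modifier expressions I mean. The view and property names are made up for illustration; this is not code from the videos or the tutorials:

```swift
import SwiftUI

// A hypothetical list row. Even this simple view already
// accumulates a chain of modifiers on each subview, plus a
// trailing chain on the container itself.
struct LandmarkRow: View {
    var name: String
    var isFavorite: Bool

    var body: some View {
        HStack {
            Text(name)
                .font(.headline)
                .lineLimit(1)
            Spacer()
            if isFavorite {
                Image(systemName: "star.fill")
                    .foregroundColor(.yellow)
            }
        }
        .padding(.vertical, 8)
    }
}
```

Even in this trivial case, every modifier adds another line to a chain, and there is no obvious convention yet for where to break and indent once conditions and more state come into play.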
My (probably not very popular) point is that I personally believe that defining a scalable XML-based format is far more likely to yield a good result than designing a DSL for the same purpose, because scalability is already baked into XML itself. You can wrap tags around tags around tags quite simply, and the resulting complexity is still kept under control.
In many cases, the problem is that the design of a DSL will start with the simple and elegant cases and stay with the simple and elegant cases for some time – until it needs to expand towards supporting higher-level complexity. But if the need for supporting that complexity hasn’t been considered from the start, then the DSL will fall apart pretty quickly.
To drive my point home: I have actually done some work on declarative UI definition in the Windows world, specifically with the Windows Presentation Foundation (WPF). Microsoft uses a dialect of XML named XAML for the UI declaration.
I fully understand that XAML in particular has lots of problems and isn’t as much fun as you might want it to be. But still, given the choice between declaring a UI in an XML dialect or by means of a DSL (like SwiftUI), I would personally very likely prefer the XML.
- That does not even include the point where suddenly imperative paradigms are mixed into the declarative language. ↩
- Which – let’s face it – it will inevitably have to. ↩
- Yes, in a text editor. There is a graphical frontend for XAML, but my experiences with it have been sobering, and I have yet to come across anyone seriously endorsing it. ↩
It didn’t come as a total surprise to me, but it was very close. Only five minutes or so before the keynote started, I saw a retweet from someone who mentioned the possibility that a dedicated “branch” of iOS (named iPadOS) existed specifically for the iPad.
And then it became a reality. The strange thing about the naming is that iOS was originally conceived as iPhone OS and only later rebranded as iOS when the iPad entered the market.
Maybe it’s just me, but wouldn’t it be ironic if all sorts of devices got their own specific OS branding while the iPhone, arguably the most important device in the lineup (and the one that started this whole family of OS variations), stayed with the generic iOS branding?
Maybe a further rebranding will happen when the final versions of <modifier>OS 13 are released to the world. And maybe the term iOS will finally become some sort of abstract base marketing term for the entire class of OS.
Feature-wise, I’m delighted about what’s in store for the next major OS release on my iPad. Although I might want to mention that font management and a download manager for Safari would also make excellent features for the iPhone, just sayin’. I keep hoping for a trickle-down of such useful details to the iPhone at some point in time.
Overall, it is a good sign that Apple gives iPadOS such prominence. At the very least, it means that from now on no WWDC will pass where the iPad does not get some new OS features. Neglect has happened way too many times in the past.
As Steve Troughton-Smith observed, they can’t ignore it anymore. Apple put this burden on themselves for good.
I have no use for pushing to blogging platforms other than WordPress. But man, the split view feature with a live preview in Ulysses 16 is genius.
David Smith’s latest app, a calendar app named CalZones that focuses on making it easy to handle events across multiple time zones, is making quite a splash these days.
The app is mentioned and/or reviewed on many tech blogs and seems to receive universal praise, as far as I can see. And the praise is not unjustified. CalZones comes with some really fresh design and animation ideas, and – as mentioned before – the ability to take the friction out of handling multiple time zones in calendar events is certainly a compelling selling proposition.
The MacStories review mentions some “strategic” shortcomings Federico identified during the beta period. My personal list of features that I would like to see added to CalZones is more about the practical aspects of working with a calendar app:
- It does not seem as if there is a touch gesture for navigating to the current date. However, when using a hardware keyboard, the app supports the shortcut ⌘ T for this purpose.
- No search. This is a big one: a calendar app that does not support search is not going to make it onto my devices.
- I haven’t found any way to create a calendar entry boundary with a finer granularity than 15 minutes. For example, you can start an event at 9:15, but 9:12 is apparently not possible. Such a limitation, if confirmed, is not exactly compatible with calendar entries for train or flight connections.
Despite all the complaints, I still think that Smith delivered a 1.0 that should be a solid basis for further iterations.
- At least for flight connections, I personally see a relevance for the ability to work with different timezones. ↩