Since the very first version of Swift, we’ve been able to define our various types as either classes, structs, or enums. But now, with the launch of Swift 5.5 and its built-in concurrency system, a new type declaration keyword has been added to the mix — actor. So, in this article, let’s explore the concept of actors, and what kinds of problems we could solve by defining custom actor types within our code bases.

Preventing data races

One of the core advantages of Swift’s new actor types is that they can help us prevent so-called “data races” — that is, memory corruption issues that can occur when two separate threads attempt to access or mutate the same data at the same time.

To illustrate, let’s take a look at the following UserStorage type, which essentially provides a way for us to pass a dictionary of User models around <a href="https://www.swiftbysundell.com/basics/value-and-reference-types">by reference, rather than by value</a>:

<pre class="splash">class UserStorage {
    private var users = [User.ID: User]()

    func store(_ user: User) {
        users[user.id] = user
    }

    func user(withID id: User.ID) -> User? {
        users[id]
    }
}</pre>

By itself, there’s really nothing wrong with the above implementation. However, if we were to use that UserStorage class within a multi-threaded environment, then we could quickly run into various kinds of data races, since our implementation currently performs its internal mutations on whatever thread or DispatchQueue it was called on. In other words, our UserStorage class currently isn’t <em>thread-safe</em>.

One way to address that would be to manually dispatch all of our reads and writes on a specific DispatchQueue, which would ensure that those operations always happen in serial order, regardless of which thread or queue our UserStorage methods are used on:

<pre class="splash">class UserStorage {
    private var users = [User.ID: User]()
    private let queue = DispatchQueue(label: "UserStorage.sync")

    func store(_ user: User) {
        queue.sync {
            self.users[user.id] = user
        }
    }

    func user(withID id: User.ID) -> User? {
        queue.sync {
            self.users[id]
        }
    }
}</pre>

The above implementation works, and now successfully protects our code against data races, but it does have a quite significant flaw. Since we’re using the sync API to dispatch our dictionary access code, our two methods will cause the current execution to be <em>blocked</em> until those dispatch calls have finished.

That can become problematic if we end up performing a lot of concurrent reads and writes, since each caller would be completely blocked until that particular UserStorage call has finished, which could result in poor performance and excessive memory usage. This type of problem is often referred to as “data contention”.

One way to fix that problem would be to instead make our two UserStorage methods fully asynchronous, which involves using the async dispatch method (rather than sync), and in the case of retrieving a user, we’d also have to use something like a closure to notify the caller when its requested model was loaded:

<pre class="splash">class UserStorage {
    private var users = [User.ID: User]()
    private let queue = DispatchQueue(label: "UserStorage.sync")

    func store(_ user: User) {
        queue.async {
            self.users[user.id] = user
        }
    }

    func loadUser(withID id: User.ID, handler: @escaping (User?) -> Void) {
        queue.async {
            handler(self.users[id])
        }
    }
}</pre>

Again, the above certainly works, and has been one of the preferred ways to implement thread-safe data access code prior to Swift 5.5. However, while closures are a fantastic tool, having to wrap all of our User handling code within a closure could definitely make that code more complex — especially since we had to make our loadUser method’s closure argument <em>escaping</em>.

A case for an actor

This is exactly the type of situation in which Swift’s new actor types can be incredibly useful. Actors work much like classes (that is, they are <a href="https://www.swiftbysundell.com/basics/value-and-reference-types">passed</a>…
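The article is cut off above, but the direction it points in is clear enough to sketch. Here's a minimal, hypothetical version of UserStorage rewritten as an actor (the User model below is a stand-in assumption, since the original isn't shown). Because an actor serializes all access to its mutable state, no manual DispatchQueue or escaping closure is needed:

```swift
import Foundation

// Stand-in model, assumed for this sketch:
struct User: Identifiable {
    let id: String
    var name: String
}

// UserStorage as an actor: the compiler routes every access to
// `users` through the actor's serial executor, so data races on
// the dictionary are prevented by construction.
actor UserStorage {
    private var users = [User.ID: User]()

    func store(_ user: User) {
        users[user.id] = user
    }

    func user(withID id: User.ID) -> User? {
        users[id]
    }
}
```

Callers outside of the actor have to use await (for example, `await storage.store(user)`), which is what replaces the escaping-closure version of loadUser from the queue-based implementation.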
When writing asynchronous code using Swift’s new built-in concurrency system, creating a Task gives us access to a new asynchronous context, in which we’re free to call async-marked APIs, and to perform work in the background. But besides enabling us to encapsulate a piece of asynchronous code, the Task type also lets us control the way that such code is run, managed, and potentially cancelled.

Bridging the gap between synchronous and asynchronous code

Perhaps the most common way to use a Task within UI-based apps is to have it act as a bridge between our synchronous, main thread-bound UI code, and any background operations that are used to fetch or process the data that our UI is rendering.

For example, here we’re using a Task within a UIKit-based ProfileViewController to be able to use an async-marked API to load the User model that our view controller should render:

<pre class="splash">class ProfileViewController: UIViewController {
    private let userID: User.ID
    private let loader: UserLoader
    private var user: User?

    ...

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)

        Task {
            do {
                let user = try await loader.loadUser(withID: userID)
                userDidLoad(user)
            } catch {
                handleError(error)
            }
        }
    }

    ...

    private func handleError(_ error: Error) {
        // Show an error view
        ...
    }

    private func userDidLoad(_ user: User) {
        // Render the user's profile
        ...
    }
}</pre>

One thing that’s really interesting about the above code is that there are no self captures, no DispatchQueue.main.async calls, no tokens or cancellables that need to be retained, or any other kind of “bookkeeping” that we normally have to do when performing asynchronous operations using tools like closures or Combine.

So how exactly are we able to perform a network call (which is definitely going to be executed on a background thread), and then directly call UI-updating methods like userDidLoad and handleError, without first manually dispatching those calls using DispatchQueue.main?

This is where Swift’s new <a href="https://www.swiftbysundell.com/articles/the-main-actor-attribute">MainActor</a> attribute comes in, which automatically ensures that UI-related APIs (such as those defined within a UIView or UIViewController) are correctly dispatched on the main thread. So, as long as we’re writing our asynchronous code using Swift’s new concurrency system, and within such a MainActor-marked context, we no longer have to worry about accidentally performing UI updates on a background queue. Neat!

Another thing that’s interesting about our above implementation is that we’re not required to manually retain our loading task in order for it to complete. That’s because asynchronous tasks are not automatically cancelled when their corresponding Task handle is deallocated — they just keep executing in the background.

Referencing and cancelling a task

However, in this particular case, we probably <em>do</em> want to maintain a reference to our loading task, since we might want to cancel it when our view controller disappears, and we probably also want to prevent duplicate tasks from being performed in case the system calls viewWillAppear while a task is already in progress:

<pre class="splash">class ProfileViewController: UIViewController {
    private let userID: User.ID
    private let loader: UserLoader
    private var user: User?
    private var loadingTask: Task<Void, Never>?

    ...

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)

        guard loadingTask == nil else {
            return
        }

        loadingTask = Task {
            do {
                let user = try await loader.loadUser(withID: userID)
                userDidLoad(user)
            } catch {
                handleError(error)
            }

            loadingTask = nil
        }
    }

    override func viewDidDisappear(_ animated: Bool) {
        super.viewDidDisappear(animated)
        loadingTask?.cancel()
        loadingTask = nil
    }

    ...
}</pre>

Note how a Task has two generic types — the first indicates what type of output it returns (which is Void in our case, since our task simply forwards its loaded User model onto our view controller’s methods), and…
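To make those two generic parameters concrete, here's a tiny standalone sketch (the names are my own, not from the article). A Task<Int, Never> always produces a value, while a task whose closure can throw gets Error as its Failure type:

```swift
import Foundation

// Success = Int, Failure = Never: this task always produces a value
// and can never fail, so awaiting its `value` needs no `try`:
let answerTask = Task<Int, Never> {
    6 * 7
}

struct LoadingFailed: Error {}

// When the closure can throw, Failure becomes Error, and reading
// `value` therefore requires `try await`:
let loadingTask = Task<String, Error> {
    let input = "done"
    guard !input.isEmpty else { throw LoadingFailed() }
    return input
}
```

Reading `await answerTask.value` versus `try await loadingTask.value` is where the difference between the two Failure types shows up at the call site.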
Very often, making code easy to unit test tends to go hand-in-hand with improving that code’s separation of concerns, its state management, and its overall architecture. In general, the more well-abstracted and organized our code is, the easier it tends to be to test in an automated fashion.

However, in an effort to make code more testable, we can very often find ourselves introducing a ton of new protocols and other kinds of abstractions, and end up making our code significantly more complicated in the process — especially when testing asynchronous code that relies on some form of networking.

But does it <em>really</em> have to be that way? What if we could actually make our code fully testable in a way that doesn’t require us to introduce any new protocols, mocking types, or complicated abstractions? Let’s explore how we could make use of Swift’s new async/await capabilities to make that happen.

Injected networking

Let’s say that we’re working on an app that includes the following ProductViewModel, which uses the very common pattern of getting its URLSession (that it’ll use to perform network calls) injected through its initializer:

<pre class="splash">class ProductViewModel {
    var title: String { product.name }
    var detailText: String { product.description }
    var price: Price { product.price(in: localUser.currency) }

    ...

    private var product: Product
    private let localUser: User
    private let urlSession: URLSession

    init(product: Product,
         localUser: User,
         urlSession: URLSession = .shared) {
        self.product = product
        self.localUser = localUser
        self.urlSession = urlSession
    }

    func reload() async throws {
        let url = URL.forLoadingProduct(withID: product.id)
        let (data, _) = try await urlSession.data(from: url)
        let decoder = JSONDecoder()
        product = try decoder.decode(Product.self, from: data)
    }
}</pre>

Now, there’s really nothing <em>wrong</em> with the above code. It works, and it uses <a href="https://www.swiftbysundell.com/articles/different-flavors-of-dependency-injection-in-swift">dependency injection</a> to avoid accessing URLSession.shared directly as a singleton (which already has huge benefits in terms of testing and overall architecture), even though it does use that shared instance by default, for convenience.

However, it could definitely be argued that inlining raw network calls within types like view models and view controllers is something that ideally should be avoided — as extracting that code would create a better separation of concerns within our project, and would let us reuse that network code whenever we need to perform a similar request elsewhere.

So, to continue iterating on the above example, let’s extract our view model’s product loading code into a dedicated ProductLoader type instead:

<pre class="splash">class ProductLoader {
    private let urlSession: URLSession

    init(urlSession: URLSession = .shared) {
        self.urlSession = urlSession
    }

    func loadProduct(withID id: Product.ID) async throws -> Product {
        let url = URL.forLoadingProduct(withID: id)
        let (data, _) = try await urlSession.data(from: url)
        let decoder = JSONDecoder()
        return try decoder.decode(Product.self, from: data)
    }
}</pre>

If we then make our view model use that new ProductLoader, rather than interacting with URLSession directly, we can simplify its implementation quite significantly — as it can now simply call loadProduct whenever it’s asked to reload its underlying data model:

<pre class="splash">class ProductViewModel {
    ...

    private var product: Product
    private let localUser: User
    private let loader: ProductLoader

    init(product: Product,
         localUser: User,
         loader: ProductLoader) {
        self.product = product
        self.localUser = localUser
        self.loader = loader
    }

    func reload() async throws {
        product = try await loader.loadProduct(withID: product.id)
    }
}</pre>

So that’s already quite an improvement. But what if we now wanted to implement a few unit tests to ensure that our view model behaves as we’d expect? To do that, we’re going to need to <em>mock</em> our app’s networking one way or another, as we definitely don’t want to perform any real…
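The article is truncated before it reveals its solution, so here's one hedged sketch of a protocol-free approach (my assumption, not necessarily the article's actual technique): inject the loading function itself as an async closure, which production code can satisfy with a real loader and which a test can replace with a stub. The Product model below is simplified for the sketch:

```swift
import Foundation

// Simplified stand-in model:
struct Product: Identifiable, Decodable {
    let id: String
    var name: String
}

// Instead of depending on a concrete ProductLoader, the view model
// takes the loading function as an async closure. Production code
// could pass `loader.loadProduct`, while tests pass a stub closure.
class ProductViewModel {
    private(set) var product: Product
    private let loadProduct: (Product.ID) async throws -> Product

    var title: String { product.name }

    init(product: Product,
         loadProduct: @escaping (Product.ID) async throws -> Product) {
        self.product = product
        self.loadProduct = loadProduct
    }

    func reload() async throws {
        product = try await loadProduct(product.id)
    }
}
```

A unit test can then construct the view model with a closure that returns a fixture, without introducing any protocols or touching the network.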
Most often, we want our various asynchronous tasks to start as soon as possible after they’ve been created, but sometimes we might want to add a slight delay to their execution — perhaps in order to give another task time to complete first, or to add some form of “debouncing” behavior.

Although there’s no direct, built-in way to run a Swift Task with a certain amount of delay, we can achieve that behavior by telling the task to <em>sleep</em> for a given number of nanoseconds before we actually start performing its operation:

<pre class="splash">Task {
    // Delay the task by 1 second:
    try await Task.sleep(nanoseconds: 1_000_000_000)

    // Perform our operation
    ...
}</pre>

Calling Task.sleep is very different from using things like the sleep system function, as the Task version is completely non-blocking in relation to other code.

While the above works perfectly fine if we want our task to always continue executing after its delay, we might also want to be able to cancel the task before it has started performing its work. Since Swift’s concurrency system uses a <em>cooperative</em> cancellation model, we need to explicitly check whether our delayed task was cancelled during its sleeping interval — since the system will only <em>mark</em> our task as cancelled when that happens, and it’s then up to us to actually <em>handle</em> that cancellation.

One way to do that would be to call the static checkCancellation method right after our task has finished sleeping, which would enable us to use a delayed task to do things like only show a loading spinner in case an async operation took more than 150 milliseconds to complete:

<pre class="splash">class VideoViewController: UIViewController {
    ...

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)

        let loadingSpinnerTask = Task {
            try await Task.sleep(nanoseconds: 150_000_000)
            try Task.checkCancellation()
            showLoadingSpinner()
        }

        Task {
            await prepareVideo()
            loadingSpinnerTask.cancel()
            hideLoadingSpinner()
        }
    }

    ...
}</pre>

Please note that the above is not meant to be a complete example of how to use a Task to load a view controller’s content. For example, we’d probably want to check whether an existing loading task is already in progress before starting a new one. To learn more, check out <a href="https://www.swiftbysundell.com/articles/the-role-tasks-play-in-swift-concurrency">“What role do Tasks play within Swift’s concurrency system?”</a>.

The way that the built-in Task.checkCancellation method works is by throwing an error in case the current task has been cancelled at that point, which in turn causes our code execution to exit out of the task’s scope.

Now, if we’re going to use a lot of delayed tasks within a given code base, then it might be worth defining a simple abstraction that lets us create such delayed tasks more easily — for example by enabling us to use a more standard TimeInterval value to define second-based delays, rather than having to use nanoseconds:

<pre class="splash">extension Task where Failure == Error {
    static func delayed(
        byTimeInterval delayInterval: TimeInterval,
        priority: TaskPriority? = nil,
        operation: @escaping @Sendable () async throws -> Success
    ) -> Task {
        Task(priority: priority) {
            let delay = UInt64(delayInterval * 1_000_000_000)
            try await Task<Never, Never>.sleep(nanoseconds: delay)
            try Task<Never, Never>.checkCancellation()
            return try await operation()
        }
    }
}</pre>

The reason we have to explicitly mark our sleep and checkCancellation calls as Task<Never, Never> is that those methods are only available on that Task specialization, and within the scope of our extension, the symbol Task refers to the <em>current</em> specialization that our extension is being used with.

With the above extension in place, we can now simply call Task.delayed whenever we want to create a delayed task. The only downside of that approach is that we now have to manually capture self within those task closures:

<pre class="splash">class VideoViewController: UIViewController {
    ...

    override func viewWillAppear(_ animated:…</pre>
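Since the closing example above is cut off, here's a self-contained sketch that reproduces the Task.delayed extension from the article and demonstrates the cancellation behavior it enables: cancelling a delayed task during its sleep interval makes checkCancellation (or the sleep itself) throw a CancellationError, so the operation never runs.

```swift
import Foundation

// Reproduced from the article:
extension Task where Failure == Error {
    static func delayed(
        byTimeInterval delayInterval: TimeInterval,
        priority: TaskPriority? = nil,
        operation: @escaping @Sendable () async throws -> Success
    ) -> Task {
        Task(priority: priority) {
            let delay = UInt64(delayInterval * 1_000_000_000)
            try await Task<Never, Never>.sleep(nanoseconds: delay)
            try Task<Never, Never>.checkCancellation()
            return try await operation()
        }
    }
}

// A delayed task that we cancel while it's still sleeping: its
// operation never runs, and awaiting its value throws instead.
let task = Task.delayed(byTimeInterval: 0.5) {
    "operation ran"
}
task.cancel()
```

Awaiting `task.value` after the cancellation surfaces the thrown CancellationError, which is the cooperative-cancellation model working as described above.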
via Swift by Sundell https://ift.tt/3dVR9Mj
When building modern applications, it’s incredibly common to want to trigger some form of asynchronous action in response to a UI event. For example, within the following SwiftUI-based PhotoView, we’re using a Task to trigger an asynchronous onLike action whenever the user taps that view’s button:

<pre class="splash">struct PhotoView: View {
    var photo: Photo
    var onLike: () async -> Void

    var body: some View {
        VStack {
            Image(uiImage: photo.image)
            Text(photo.description)
            Button(action: {
                Task {
                    await onLike()
                }
            }, label: {
                Image(systemName: "hand.thumbsup.fill")
            })
            .disabled(photo.isLiked)
        }
    }
}</pre>

The above implementation is definitely a good starting point. However, if our Photo model’s isLiked property isn’t updated until <em>after</em> our asynchronous call has completed, then we might end up with duplicate onLike calls if the user taps the button multiple times in quick succession — since we’re currently only disabling our button once that property has been set to true.

Now, we <em>could</em> choose to fix that issue by performing a local model update right before we call onLike. However, doing so would introduce <em>multiple sources of truth</em> for our data model, which is something that’s typically good to avoid. So, ideally, we’d like to keep having our PhotoView simply render the Photo model that it gets from its parent view, without having to make any local copies or modifications.

So, instead, let’s explore how we could make our button disable itself while its action is being performed. Since that’ll involve introducing additional state that’s only really relevant to our button itself — let’s encapsulate all of that code within a new AsyncButton view that’ll also display a loading spinner while waiting for its async action to complete:

<pre class="splash">struct AsyncButton<Label: View>: View {
    var action: () async -> Void
    @ViewBuilder var label: () -> Label

    @State private var isPerformingTask = false

    var body: some View {
        Button(
            action: {
                Task {
                    isPerformingTask = true
                    await action()
                    isPerformingTask = false
                }
            },
            label: {
                ZStack {
                    // We hide the label by setting its opacity
                    // to zero, since we don't want the button's
                    // size to change while its task is performed:
                    label().opacity(isPerformingTask ? 0 : 1)

                    if isPerformingTask {
                        ProgressView()
                    }
                }
            }
        )
        .disabled(isPerformingTask)
    }
}</pre>

If you’re curious about the @ViewBuilder attribute that the above button’s label closure is annotated with, check out <a href="https://www.swiftbysundell.com/tips/annotating-properties-with-result-builder-attributes">“Annotating properties with result builder attributes”</a>.

Since our new AsyncButton has an API that perfectly matches SwiftUI’s built-in Button type, we’ll be able to update our PhotoView by simply changing the type of button that it creates, and by removing the Task within its action closure (since we can now use await directly within that closure, as it’s marked with the async keyword):

<pre class="splash">struct PhotoView: View {
    var photo: Photo
    var onLike: () async -> Void

    var body: some View {
        VStack {
            Image(uiImage: photo.image)
            Text(photo.description)
            AsyncButton(action: {
                await onLike()
            }, label: {
                Image(systemName: "hand.thumbsup.fill")
            })
            .disabled(photo.isLiked)
        }
    }
}</pre>

Very nice! Now, if that was the only place within our app in which we needed to perform the above kind of asynchronous action, then we could wrap things up here. But let’s say that our code base also contains many other, similar async-function-calling buttons, and that we’d like to reuse our new AsyncButton within those places as well.

To make things even more interesting, let’s also say that within some parts of our code base, we don’t want to show a loading spinner while our async action is being performed, and that we’d also like to have the option to perform multiple actions at the same time.

To support those kinds of options, let’s introduce an ActionOption enum, which will enable each part of our code base to tweak how it wants our AsyncButton to behave when performing its action:…
In Swift, there are two ways to capture self as a strong reference within an escaping closure. The first is to explicitly use the self keyword whenever we’re calling a method or accessing a property on the current object within such a closure.

For example, the following VideoViewController performs such a strong capture in order to be able to call two of its own methods whenever it has finished preparing its Video model:
<pre class="splash">class VideoViewController: UIViewController {
    private var video: Video

    ...

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)

        prepareVideo {
            self.movePlayhead(to: self.video.lastPlayheadPosition)
            self.startPlaybackIfNeeded()
        }
    }
}</pre>

The reason that it’s safe to capture self strongly within the above example is that the closure that we pass to prepareVideo isn’t stored in a way that would potentially cause a retain cycle. For more on that topic, check out “Is using [weak self] always required when working with closures?”.

The above is certainly the most well-known way to access properties and methods on self within an escaping closure. But there’s also another, lesser-known technique that lets us reference self just once — and that’s to use a capture list to set up our reference. While capture lists are most commonly used when we want to create a weak or unowned reference to self, or when we want to capture a specific set of properties, they can also be used in situations like the above in order to avoid having to retype that same self prefix multiple times — like this:

<pre class="splash">class VideoViewController: UIViewController {
    private var video: Video

    ...

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)

        prepareVideo { [self] in
            movePlayhead(to: video.lastPlayheadPosition)
            startPlaybackIfNeeded()
        }
    }
}</pre>

Note how we can now call methods and access properties on self just like if we were writing our code in a context other than an escaping closure, which is quite neat!

Of course, we also always have the option to move the logic that we want to perform to a dedicated method instead, which we could then simply call from within our closure:

<pre class="splash">class VideoViewController: UIViewController {
    private var video: Video

    ...

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)

        prepareVideo {
            self.videoWasPrepared()
        }
    }

    private func videoWasPrepared() {
        movePlayhead(to: video.lastPlayheadPosition)
        startPlaybackIfNeeded()
    }
}</pre>

It’s important to re-iterate that the above techniques are specifically for creating strong references to objects. When we don’t want to retain the current object, we’d be much better off using either a weak or unowned self capture, or avoiding capturing self entirely. For more on that topic, check out “Swift’s closure capturing mechanics”. Also, when working with value types, the compiler will implicitly capture self for us, which you can read more about here.

Thanks for reading, and Happy New Year! 🎉

via Swift by Sundell https://ift.tt/3zhzF6X
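As a small addendum, here's a sketch of the weak-capture alternative mentioned at the end (the PlaybackController type below is my own invention, purely for illustration). When a closure is stored by the same object it references, a strong self capture would create a retain cycle, so we capture self weakly instead:

```swift
// A contrived object that stores an escaping closure. Capturing
// self strongly inside `onTick` would create a retain cycle
// (object retains closure, closure retains object), so we use
// a [weak self] capture list and the object stays deallocatable.
class PlaybackController {
    var playheadPosition = 0
    var onTick: (() -> Void)?

    func startObserving() {
        onTick = { [weak self] in
            self?.playheadPosition += 1
        }
    }
}
```

Calling `onTick?()` still updates the object while it's alive, but the stored closure no longer keeps the object in memory on its own.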
A challenge that many developers face as they maintain various code bases over time is how to neatly connect different frameworks and APIs in a way that properly adheres to the conventions of each technology involved. For example, as teams around the world are starting to adopt Swift 5.5’s async/await-powered <a href="https://www.swiftbysundell.com/discover/concurrency">concurrency system</a>, we’ll likely find ourselves in situations where we need to create versions of our async-marked APIs that are compatible with other asynchronous programming techniques — such as <a href="https://www.swiftbysundell.com/discover/combine">Combine</a>.

While we’ve already taken a look at how Combine relates to concurrency APIs like <a href="https://www.swiftbysundell.com/articles/async-sequences-streams-and-combine">async sequences and streams</a>, and how we can make it possible to call async-marked functions <a href="https://www.swiftbysundell.com/articles/calling-async-functions-within-a-combine-pipeline">within a Combine pipeline</a> — in this article, let’s explore how we could make it easy to create Combine-based variants of any async API, regardless of whether it was defined by us, by Apple, or as part of a third-party dependency.

Async futures

Let’s say that an app that we’re working on contains the following ModelLoader, which can be used to load any Decodable model over the network. It performs its work through an async function that looks like this:

<pre class="splash">class ModelLoader<Model: Decodable> {
    ...

    func loadModel(from url: URL) async throws -> Model {
        ...
    }
}</pre>

Now let’s say that we’d also like to create a Combine-based version of the above loadModel API, for example in order to be able to call it within specific parts of our code base that might’ve been written in a more reactive style using the Combine framework.

We could of course choose to write that sort of compatibility code specifically for our ModelLoader type, but since this is a general problem that we’re likely to encounter multiple times when working with Combine-based code, let’s instead create a more generic solution that we’ll be able to easily reuse across our code base.

Since we’re dealing with async functions that either return a single value, or throw an error, let’s use Combine’s Future publisher to wrap those calls. That publisher type was specifically built for these kinds of use cases, since it gives us a closure that can be used to report a single Result back to the framework.

So let’s go ahead and extend the Future type with a convenience initializer that makes it possible to initialize an instance with an async closure:

<pre class="splash">extension Future where Failure == Error {
    convenience init(operation: @escaping () async throws -> Output) {
        self.init { promise in
            Task {
                do {
                    let output = try await operation()
                    promise(.success(output))
                } catch {
                    promise(.failure(error))
                }
            }
        }
    }
}</pre>

For more information on how Combine’s Future type works, check out <a href="https://www.swiftbysundell.com/articles/using-combine-futures-and-subjects">“Using Combine’s futures and subjects”</a>.

The power of creating an abstraction like that, which isn’t tied to any specific use case, is that we’ll now be able to apply it to <em>any</em> async API that we want to make Combine-compatible. All it takes is a few lines of code that call the API that we’re looking to bridge within a closure that’s passed to our new Future initializer — like this:

<pre class="splash">extension ModelLoader {
    func modelPublisher(for url: URL) -> Future<Model, Error> {
        Future {
            try await self.loadModel(from: url)
        }
    }
}</pre>

Neat! Note how we could’ve chosen to give that Combine-based version the same loadModel name as our async-powered one (since Swift supports method overloading). However, in this case, it might be a good idea to clearly separate the two, which is why the above new API has a name that explicitly includes the word “Publisher”.

Reactive async sequences

Async sequences and streams are perhaps the closest that the Swift…
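To see the Future bridge from earlier end-to-end, here's a self-contained sketch that reproduces the article's Future initializer and applies it to a trivial async function (loadGreeting is a stand-in assumption in place of ModelLoader, and this assumes a platform where Combine is available):

```swift
import Combine
import Foundation

// Reproduced from the article:
extension Future where Failure == Error {
    convenience init(operation: @escaping () async throws -> Output) {
        self.init { promise in
            Task {
                do {
                    let output = try await operation()
                    promise(.success(output))
                } catch {
                    promise(.failure(error))
                }
            }
        }
    }
}

// A stand-in for any async API, such as ModelLoader.loadModel:
func loadGreeting() async throws -> String {
    "Hello, Combine!"
}

// The bridged publisher, ready to be used in a Combine pipeline:
let publisher = Future { try await loadGreeting() }
```

Subscribing with sink then receives the async function's result as a regular Combine value, followed by a finished completion.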
Swift offers many different built-in ways to iterate over collections (such as arrays, sets, and dictionaries) — the most basic of which being for loops, which let us run a piece of code for each element that was found within a given collection. For example, here we’re looping through an array of names, and we’re then outputting each name by printing it into the console:<pre class="splash">let names = ["John", "Emma", "Robert", "Julia"]for name in names { print(name)}</pre>An alternative way of accomplishing the same thing would be to instead call the forEach method on our array of names, which lets us pass a <a href="https://www.swiftbysundell.com/basics/closures">closure</a> that’ll be run for each element:<pre class="splash">names.forEach { name in print(name)}</pre>One key difference between a for loop and forEach, though, is that the latter doesn’t enable us to break the iteration in order to stop it once a given condition has been met. For example, when going back to using a for loop again, we could decide to stop the iteration once the name <em>Robert</em> was encountered:<pre class="splash">let names = ["John", "Emma", "Robert", "Julia"]for name in names { if name == "Robert" { break } print(name) // Will only print "John" and "Emma"}</pre>There are also many other ways to use for loops in Swift. For example, if we’d like to gain access to what index that we’re currently handling as part of our iteration, then we could instead choose to base our loop on a range that goes from zero to the number of elements within our collection. 
We could then use the Array type’s subscripting feature to retrieve the current element for that index — like this:<pre class="splash">for index in 0..<names.count { print(index, names[index])}</pre>Another way to write the exact same loop would be to instead iterate over our array’s indicies, rather than constructing a range manually:<pre class="splash">for index in names.indices { print(index, names[index])}</pre>Yet another approach would be to use the enumerated method to convert our array into a sequence containing tuples that pair each index with its associated element:<pre class="splash">for (index, name) in names.enumerated() { print(index, name)}</pre>Note that the enumerated method always uses Int-based offsets, which in the case of Array is a perfect match, since that collection also uses Int values as its indices.Next, let’s take a look at while loops, which offer a way for us to repeatedly run a block of code as long as a given boolean condition remains true. For example, here’s how we could use a while loop to keep appending each name within our names array to a string, as long as that string contains less than 8 characters:<pre class="splash">let names = ["John", "Emma", "Robert", "Julia"]var index = 0var string = ""while string.count < 8 { string.append(names[index]) index += 1}print(string) // "JohnEmma"</pre>Another way to construct a while loop (which perhaps isn’t as commonly used in Swift as in other languages) is by using a separate repeat block, which will also get repeatedly run as long as our while condition evaluates to true:<pre class="splash">let names = ["John", "Emma", "Robert", "Julia"]var index = 0var string = ""repeat { string.append(names[index]) index += 1} while string.count < 8print(string) // "JohnEmma"</pre>The key difference between repeat and a stand-alone while loop is that a repeat block will always be run at least once, even if the attached while condition initially evaluates to false.One important thing to keep in mind 
when using while loops, though, is that it’s up to us to make sure that each loop is ended at an appropriate time — either by manually using break (like we did earlier when using a for loop), or by ensuring that our loop’s boolean condition evaluates to false once the iteration should be terminated.For example, when constructing our name-based string value, we probably want to make sure that the current index won’t go out of the bounds of our names array — since otherwise our app would crash…
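One way to guard against that crash is to also check the index against the array’s bounds within the loop’s condition itself. Here’s a minimal sketch of that safer variant, reusing the same names and string values from the earlier examples:

```swift
let names = ["John", "Emma", "Robert", "Julia"]
var index = 0
var string = ""

// Also require the index to stay within bounds, so that the
// loop terminates safely even if we never reach 8 characters:
while string.count < 8 && index < names.count {
    string.append(names[index])
    index += 1
}

print(string) // "JohnEmma"
```

With that extra condition in place, the loop now simply stops once it runs out of elements, rather than trapping on an out-of-bounds subscript.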
SwiftUI offers several different ways for us to create stacks of overlapping views that can be arranged along the Z axis, which in turn enables us to define various kinds of overlays and backgrounds for the views that we build. Let’s explore some of those built-in stacking methods and what sort of UIs that they enable us to create.ZStacksLike its name implies, SwiftUI’s ZStack type is the Z-axis equivalent of the horizontally-oriented HStack and the vertical VStack. When placing multiple views within a ZStack, they’re (by default) rendered back-to-front, with the first view being placed at the back. For example, here we’re creating a full-screen ContentView, which renders a gradient with a text stacked on top:<pre data-preview="full-screen-gradient">struct ContentView: View { var body: some View { ZStack { LinearGradient( colors: [.orange, .red], startPoint: .topLeading, endPoint: .bottomTrailing ) .ignoresSafeArea() Text("Swift by Sundell") .foregroundColor(.white) .font(.title) } }}</pre>Tip: You can use the above code sample’s PREVIEW button to see what it’ll look like when rendered.The reason that the above ContentView is rendered across all of the available screen space is because a LinearGradient will always occupy as much space as possible by default, and since any stack’s size defaults to the total size of its children, that leads to our ZStack being resized to occupy that same full-screen space.The background modifierHowever, sometimes we might not want a given background to stretch out to fill all available space, and while we <em>could</em> address that by applying various sizing modifiers to our background view, SwiftUI ships with a built-in tool that automatically resizes a given view’s background to perfectly fit its parent — the background modifier.Here’s how we could use that modifier to instead apply our LinearGradient background directly to our Text-based view, which makes that background take on the exact same size as our text itself (including
its padding):<pre data-preview="gradient-matched-size">struct ContentView: View { var body: some View { Text("Swift by Sundell") .foregroundColor(.white) .font(.title) .padding(35) .background( LinearGradient( colors: [.orange, .red], startPoint: .topLeading, endPoint: .bottomTrailing ) ) }}</pre>The reason that the padding is included when calculating our background’s size in the above example is because we’re applying the padding modifier <em>before</em> adding our background. To learn more about that, check out <a href="https://www.swiftbysundell.com/questions/swiftui-modifier-order">“When does the order of SwiftUI modifiers matter, and why?”</a>.One thing that’s important to point out, though, is that even though a view’s background does indeed get resized according to the parent view itself, there’s no form of clipping applied by default. So if we were to give our LinearGradient an explicit size that’s larger than its parent, then it’ll actually be rendered out of bounds (which we can clearly demonstrate by adding a border to our main Text-based view):<pre data-preview="gradient-out-of-bounds">struct ContentView: View { var body: some View { Text("Swift by Sundell") .foregroundColor(.white) .font(.title) .padding(35) .background( LinearGradient( colors: [.orange, .red], startPoint: .topLeading, endPoint: .bottomTrailing ) .frame(width: 300, height: 300) ) .border(Color.blue) }}</pre>There are multiple ways to apply clipping to a view, though, which would remove the above sort of out-of-bounds rendering.
For example, we could use either the clipped or clipShape modifier to tell the view to apply a clipping mask to its bounds, or we could give our view rounded corners (which also introduces clipping) — like this:<pre data-preview="gradient-rounded-corners">struct ContentView: View { var body: some View { Text("Swift by Sundell") .foregroundColor(.white) .font(.title) .padding(35) .background( LinearGradient( colors: [.orange, .red], startPoint: .topLeading, endPoint: .bottomTrailing ) .frame(width: 300, height:…
Sometimes, we might want to automatically retry an asynchronous operation that failed, for example in order to work around temporary network problems, or to re-establish some form of connection.Here we’re doing just that when using Apple’s <a href="https://www.swiftbysundell.com/discover/combine">Combine framework</a> to implement a network call, which we’ll retry up to 3 times before handling any error that was encountered:<pre class="splash">struct SettingsLoader { var url: URL var urlSession = URLSession.shared var decoder = JSONDecoder() func load() -> AnyPublisher<Settings, Error> { urlSession .dataTaskPublisher(for: url) .map(\.data) .decode(type: Settings.self, decoder: decoder) .retry(3) .eraseToAnyPublisher() }}</pre>Note that the above example will unconditionally retry our loading operation (up to 3 times) regardless of what kind of error that was thrown.But what if we wanted to implement something similar, but using <a href="https://www.swiftbysundell.com/discover/concurrency">Swift Concurrency</a> instead? While Combine’s Publisher protocol includes the above retry operator as a built-in API, neither of Swift’s new concurrency APIs offer something similar (at least not at the time of writing), so we’ll have to get creative!One really neat aspect of Swift’s new concurrency system, and async/await in particular, is that it enables us to mix various asynchronous calls with standard control flow constructs, such as if statements and for loops. 
So, one way to implement automatic retries for await-marked calls would be to place the asynchronous code that we want to run within a loop that iterates over a range, which in turn describes how many retries that we wish to perform — like this:<pre class="splash">struct SettingsLoader { var url: URL var urlSession = URLSession.shared var decoder = JSONDecoder() func load() async throws -> Settings { // Perform 3 attempts, and retry on any failure: for _ in 0..<3 { do { return try await performLoading() } catch { // This 'continue' statement isn't technically // required, but makes our intent more clear: continue } } // The final attempt (which throws its error if it fails): return try await performLoading() } private func performLoading() async throws -> Settings { let (data, _) = try await urlSession.data(from: url) return try decoder.decode(Settings.self, from: data) }}</pre>The above implementation works perfectly fine, but if we’re looking to add the same kind of retrying logic in multiple places throughout a project, then it might be worth moving that code into some form of utility that could be easily reused.One way to do just that would be to extend Swift’s Task type with a convenience API that lets us quickly create such auto-retrying tasks. Our actual logic can remain almost identical to what it was before, but we’ll parameterize the maximum number of retries, and we’ll also add support for cancellation as well:<pre class="splash">extension Task where Failure == Error { @discardableResult static func retrying( priority: TaskPriority? 
= nil, maxRetryCount: Int = 3, operation: @Sendable @escaping () async throws -> Success ) -> Task { Task(priority: priority) { for _ in 0..<maxRetryCount { try Task<Never, Never>.checkCancellation() do { return try await operation() } catch { continue } } try Task<Never, Never>.checkCancellation() return try await operation() } }}</pre>That’s already a really useful, and completely reusable implementation, but let’s take things one step further, shall we?When retrying asynchronous operations, it’s very common to want to add a bit of delay between each retry — perhaps in order to give an external system (such as a server) a chance to recover from some kind of error before we make another attempt at calling it. So let’s also add support for such delays, which can easily be done using the built-in Task.sleep API:<pre class="splash">extension Task where Failure == Error { @discardableResult static func retrying( priority: TaskPriority? = nil, maxRetryCount: Int = 3, retryDelay: TimeInterval…
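The core shape of that retry logic can also be illustrated in miniature, outside of the concurrency system. The following is a deliberately simplified, synchronous sketch of the same pattern (the retrying function, RetryError, and the flaky operation below are hypothetical illustrations, not part of the article’s Task extension): a loop makes up to maxRetryCount attempts, and a final attempt outside the loop propagates its error if it fails:

```swift
struct RetryError: Error {}

// A simplified, synchronous version of the retry pattern:
// make up to maxRetryCount attempts, then one final attempt
// whose error (if any) is propagated to the caller.
func retrying<T>(
    maxRetryCount: Int = 3,
    operation: () throws -> T
) rethrows -> T {
    for _ in 0..<maxRetryCount {
        do {
            return try operation()
        } catch {
            continue
        }
    }

    // The final attempt throws its error if it fails:
    return try operation()
}

// A flaky operation that fails twice before succeeding:
var attempts = 0
let value = try retrying { () throws -> String in
    attempts += 1
    guard attempts > 2 else { throw RetryError() }
    return "loaded"
}

print(value, attempts) // loaded 3
```

The async version in the article follows exactly this structure, with await-marked calls and cancellation checks added on top.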
Managing an app’s memory is something that tends to be especially tricky to do within the context of asynchronous code, as various objects and values often need to be captured and retained over time in order for our asynchronous calls to be performed and handled.While Swift’s relatively new async/await syntax does make many kinds of asynchronous operations easier to write, it still requires us to be quite careful when it comes to managing the memory for the various tasks and objects that are involved in such asynchronous code.Implicit capturesOne interesting aspect of async/await (and the Task type that we need to use to wrap such code when <a href="https://www.swiftbysundell.com/articles/connecting-async-await-with-other-swift-code/#calling-async-functions-from-a-synchronous-context">calling it from a synchronous context</a>) is how objects and values often end up being <em>implicitly captured</em> while our asynchronous code is being executed.For example, let’s say that we’re working on a DocumentViewController, which downloads and displays a Document that was downloaded from a given URL. To make our download execute lazily when our view controller is about to be displayed to the user, we’re starting that operation within our view controller’s viewWillAppear method, and we’re then either rendering the downloaded document once available, or showing any error that was encountered — like this:<pre class="splash">class DocumentViewController: UIViewController { private let documentURL: URL private let urlSession: URLSession ... override func viewWillAppear(_ animated: Bool) { super.viewWillAppear(animated) Task { do { let (data, _) = try await urlSession.data(from: documentURL) let decoder = JSONDecoder() let document = try decoder.decode(Document.self, from: data) renderDocument(document) } catch { showErrorView(for: error) } } } private func renderDocument(_ document: Document) { ... } private func showErrorView(for error: Error) { ... 
}}</pre>Now, if we just quickly look at the above code, it might not seem like there’s any object capturing going on whatsoever. After all, asynchronous capturing has traditionally only happened within <a href="https://www.swiftbysundell.com/articles/capturing-objects-in-swift-closures">escaping closures</a>, which in turn require us to always explicitly refer to self whenever we’re accessing a local property or method within such a closure (when self refers to a class instance, that is).So we might expect that if we start displaying our DocumentViewController, but then navigate away from it before its download has completed, it’ll be successfully deallocated once no external code (such as its parent UINavigationController) maintains a strong reference to it. But that’s actually not the case.That’s because of the aforementioned <em>implicit capturing</em> that happens whenever we create a Task, or use await to wait for the result of an asynchronous call. Any object used within a Task will automatically be retained until that task has finished (or failed), including self whenever we’re referencing any of its members, like we’re doing above.In many cases, this behavior might not actually be a problem, and will likely not lead to any actual <em>memory leaks</em>, since all captured objects will eventually be released once their capturing task has completed.
However, let’s say that we’re expecting the documents downloaded by our DocumentViewController to potentially be quite large, and that we wouldn’t want multiple view controllers (and their download operations) to remain in memory if the user quickly navigates between different screens.The classic way to address this sort of problem would be to perform a weak self capture, which is often accompanied by a guard let self expression within the capturing closure itself — in order to turn that weak reference into a strong one that can then be used within the closure’s code:<pre class="splash">class DocumentViewController: UIViewController { ... override func viewWillAppear(_ animated: Bool)…
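The core effect of such a weak capture can be demonstrated without any UIKit involvement as well. In the following standalone sketch (the Downloader type and its handler are hypothetical names chosen for illustration, not the article’s DocumentViewController), a weakly captured instance is allowed to deallocate even though a closure referencing it is still alive:

```swift
final class Downloader {
    // Returns a handler that captures self weakly, so this
    // instance won't be kept alive by the handler itself:
    func makeHandler() -> () -> String {
        { [weak self] in
            guard let self = self else {
                return "instance was deallocated"
            }
            return "instance is alive: \(self)"
        }
    }
}

var downloader: Downloader? = Downloader()
let handler = downloader!.makeHandler()
print(handler().hasPrefix("instance is alive")) // true

// Releasing our only strong reference lets the instance
// deallocate, since the handler's capture is weak:
downloader = nil
print(handler()) // "instance was deallocated"
```

Had the closure captured self strongly instead, the instance would have remained in memory for as long as the handler itself was retained, which is exactly the situation the weak capture avoids.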
One of the core strengths of Swift’s protocols is that they enable us to define shared interfaces that multiple types can conform to, which in turn lets us interact with those types in a very uniform way, without necessarily knowing what underlying type that we’re currently dealing with.For example, to clearly define an API that enables us to persist a given instance onto disk, we might choose to use a protocol that looks something like this:<pre class="splash">protocol DiskWritable { func writeToDisk(at url: URL) throws}</pre>One advantage of defining commonly used APIs that way is that it helps us keep our code consistent, as we can now make any type that should be disk-writable conform to the above protocol, which then requires us to implement the exact same method for all such types.Another big advantage of Swift protocols is that they’re extendable, which makes it possible for us to define all sorts of convenience APIs for both our own protocols, as well as those that are defined externally — for example within the standard library, or within any framework that we’ve imported.When writing those kinds of convenience APIs, we might also want to mix the protocol that we’re currently extending with some functionality provided by <em>another</em> protocol. For example, let’s say that we wanted to provide a default implementation of our DiskWritable protocol’s writeToDisk method for types that also conform to the Encodable protocol — since a type that’s encodable can be transformed into Data, which we could then automatically write to disk.One way to make that happen would be to make our DiskWritable protocol <em>inherit</em> from Encodable, which in turn will require all conforming types to implement both of those two protocols’ requirements.
We could then simply extend DiskWritable in order to add that default implementation of writeToDisk that we were looking to provide:<pre class="splash">protocol DiskWritable: Encodable { func writeToDisk(at url: URL) throws}extension DiskWritable { func writeToDisk(at url: URL) throws { let encoder = JSONEncoder() let data = try encoder.encode(self) try data.write(to: url) }}</pre>While powerful, the above approach does have a quite significant downside, in that we’ve now completely coupled our DiskWritable protocol with Encodable — meaning that we can no longer use that protocol by itself, without requiring any conforming type to also fully implement Encodable, which might become problematic.Another, much more flexible approach would be to let DiskWritable remain a completely stand-alone protocol, and instead write a type-constrained extension that only adds our default writeToDisk implementation to types that <em>also</em> conform to Encodable separately — like this:<pre class="splash">extension DiskWritable where Self: Encodable { func writeToDisk(at url: URL) throws { let encoder = JSONEncoder() let data = try encoder.encode(self) try data.write(to: url) }}</pre>The tradeoff here is that the above approach does require each type that wants to leverage our default writeToDisk implementation to explicitly conform to both DiskWritable and Encodable, which might not be a big deal, but it could make it a bit harder to discover that default implementation — since it’s no longer automatically available on all DiskWritable-conforming types.One way to address that discoverability issue, though, could be to create a convenience type alias (using Swift’s protocol composition operator, &) that gives us an indication that DiskWritable and Encodable can be combined to unlock new functionality:<pre class="splash">typealias DiskWritableByEncoding = DiskWritable & Encodable</pre>When a type conforms to those two protocols (either using the above type alias, or completely
separately), it’ll now get access to our default writeToDisk implementation (while still having the option to provide its own, custom implementation as well):<pre class="splash">struct TodoList: DiskWritableByEncoding { var name: String var items: [Item] ...}let list = TodoList(...)try…
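Putting all of the pieces together, here’s a self-contained sketch of how such a constrained extension might be exercised end to end (the Note type, its properties, and the temporary-file location are illustrative assumptions, not from the original example):

```swift
import Foundation

protocol DiskWritable {
    func writeToDisk(at url: URL) throws
}

// Only types that are *also* Encodable get this default implementation:
extension DiskWritable where Self: Encodable {
    func writeToDisk(at url: URL) throws {
        let encoder = JSONEncoder()
        let data = try encoder.encode(self)
        try data.write(to: url)
    }
}

typealias DiskWritableByEncoding = DiskWritable & Encodable

// A hypothetical conforming type (Decodable added so that we
// can read the written data back for verification):
struct Note: DiskWritableByEncoding, Decodable {
    var title: String
    var body: String
}

let note = Note(title: "Hello", body: "World")
let url = URL(fileURLWithPath: NSTemporaryDirectory())
    .appendingPathComponent("note.json")

// The default writeToDisk implementation kicks in here:
try note.writeToDisk(at: url)

let written = try JSONDecoder().decode(Note.self, from: Data(contentsOf: url))
print(written.title) // Hello
```

Note how the conforming type gets the default writeToDisk implementation without declaring it, while remaining free to supply its own custom version if needed.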
A major part of the challenge of architecting UI-focused code bases tends to come down to deciding where to draw the line between the code that needs to interact with the platform’s various UI frameworks, versus code that’s completely within our own app’s domain of logic.That task might become especially tricky when working with SwiftUI, as so much of our UI-centric logic tends to wind up within our various View declarations, which in turn often makes such code really difficult to verify using unit tests.So, in this article, let’s take a look at how we could deal with that problem, and explore how to make UI-related logic fully testable — even when that logic is primarily used within SwiftUI-based views.Logic intertwined with views“You shouldn’t put <em>business logic</em> within your views”, is a piece of advice that’s often mentioned when discussing unit testing within the context of UI-based projects, such as iOS and Mac apps. However, in practice, that advice can sometimes be tricky to follow, as the most natural or intuitive place to put view-related logic is often within the views themselves.As an example, let’s say that we’re working on an app that contains the following SendMessageView. Although the actual message sending logic (and its associated networking) has already been abstracted using a MessageSender protocol, all of the UI-specific logic related to sending messages is currently embedded right within our view:<pre class="splash">struct SendMessageView: View { var sender: MessageSender @State private var message = "" @State private var isSending = false @State private var sendingError: Error? var body: some View { VStack { Text("Your message:") TextEditor(text: $message) Button(isSending ? "Sending..." 
: "Send") { isSending = true sendingError = nil Task { do { try await sender.sendMessage(message) message = "" } catch { sendingError = error } isSending = false } } .disabled(isSending || message.isEmpty) if let error = sendingError { Text(error.localizedDescription) .foregroundColor(.red) } } }}</pre>At first glance, the above might not look so bad. Our view isn’t <em>massive</em> by any stretch of the imagination, and the code is quite well-organized. However, unit testing that view’s logic would currently be incredibly difficult — as we’d have to find some way to spin up our view within our tests, then find its various UI controls (such as its “Send” button), and then figure out a way to trigger and observe those views ourselves.That’s because SwiftUI views aren’t actual, concrete representations of the UI that we’re drawing on-screen, which can then be controlled and inspected as we wish. Instead, they’re ephemeral descriptions of what we want our various views to look like, which the system then renders and manages on our behalf.So, although we <em>could</em> most likely find a way to unit test our SwiftUI views directly — ideally, we’ll probably want to verify our logic in a much more controlled, isolated environment.One way to create such an isolated environment would be to extract all of the logic that we’re looking to test out from our views, and into objects and functions that are under our complete control — for example by using a <a href="https://www.swiftbysundell.com/articles/different-flavors-of-view-models-in-swift">view model</a>. Here’s what such a view model could end up looking like if we were to move all of our message sending UI logic out from our SendMessageView:<pre class="splash">@MainActor class SendMessageViewModel: ObservableObject { @Published var message = "" @Published private(set) var errorText: String? var buttonTitle: String { isSending ? "Sending..."
: "Send" } var isSendingDisabled: Bool { isSending || message.isEmpty } private let sender: MessageSender private var isSending = false init(sender: MessageSender) { self.sender = sender } func send() { guard !message.isEmpty else { return } guard !isSending else { return } isSending = true errorText = nil Task { do { try await sender.sendMessage(message) message…
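To show why this kind of extraction pays off, here’s a deliberately simplified, Combine-free sketch of how such view-model logic becomes directly assertable in plain code (the SendMessageLogic type below is a hypothetical, stripped-down stand-in for the kind of view model shown above, keeping only the synchronous properties so that they can be verified in isolation):

```swift
// A stripped-down, synchronous stand-in for a message-sending
// view model, keeping just the logic we want to verify:
final class SendMessageLogic {
    var message = ""
    private(set) var isSending = false

    var buttonTitle: String {
        isSending ? "Sending..." : "Send"
    }

    var isSendingDisabled: Bool {
        isSending || message.isEmpty
    }
}

let logic = SendMessageLogic()

// The button starts out disabled, since the message is empty:
print(logic.isSendingDisabled) // true
print(logic.buttonTitle) // "Send"

// Entering a message enables sending:
logic.message = "Hello"
print(logic.isSendingDisabled) // false
```

Since none of the above involves any actual views, each of those properties can be asserted directly, with no need to spin up or inspect a SwiftUI hierarchy.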
Auto linking is a feature that embeds information in your binaries at compile time which is then used at link time to automatically link your dependencies. This allows you to reduce the duplication of flags between the different phases of your (or your consumers') builds.For example, with this Objective-C file:<pre class="highlight">#include <Foundation/Foundation.h>int main() { NSLog(@"Hello, World!"); return 1;}</pre>Compiled with:<pre class="highlight">$ clang -fmodules -c foo.m -o foo.o</pre>You can then inspect the options added for use at link time:<pre class="highlight">$ otool -l foo.o | grep LC_LINKER_OPTION -A3 cmd LC_LINKER_OPTION cmdsize 40 count 2 string #1 -framework string #2 Foundation...</pre>Now when linking this binary you don't have to pass any extra flags to the linker to make sure you link <code class="language-plaintext highlighter-rouge">Foundation</code>:<pre class="highlight">$ ld foo.o -syslibroot `xcrun --show-sdk-path`</pre>To compare, if you compile the binary without <code class="language-plaintext highlighter-rouge">-fmodules</code><a href="#fn:1">1</a>:<pre class="highlight">$ clang -c foo.m -o foo.o</pre>You don't get any <code class="language-plaintext highlighter-rouge">LC_LINKER_OPTION</code>s. Then when linking the binary with the same command as before, it fails with these errors:<pre class="highlight">$ ld foo.o -syslibroot `xcrun --show-sdk-path`Undefined symbols for architecture arm64: "_NSLog", referenced from: _main in foo.o "___CFConstantStringClassReference", referenced from: CFString in foo.old: symbol(s) not found for architecture arm64</pre>To make it succeed you must explicitly link <code class="language-plaintext highlighter-rouge">Foundation</code> through an argument to your linker invocation:<pre class="highlight">$ ld foo.o -syslibroot `xcrun --show-sdk-path` -framework Foundation</pre>Auto linking is also applied when using module maps that use the <code class="language-plaintext highlighter-rouge">link</code> directive.
For example with this module map file:<pre class="highlight">// module.modulemapmodule foo { link "foo" link framework "Foundation"}</pre>That you include in this source file:<pre class="highlight">@import foo;int main() { return 1;}</pre>And compile (with an include path to the <code class="language-plaintext highlighter-rouge">module.modulemap</code> file):<pre class="highlight">$ clang -fmodules -c foo.m -o foo.o -I.</pre>The produced object depends on <code class="language-plaintext highlighter-rouge">foo</code> and <code class="language-plaintext highlighter-rouge">Foundation</code>. This can be useful for handwriting module map files for prebuilt libraries, and for quite a few other cases. You can read about this file format in <a href="https://clang.llvm.org/docs/Modules.html">the docs</a>.You can also see auto linking with Swift code:<pre class="highlight">print("Hello, World!")</pre>Compiled with:<pre class="highlight">$ swiftc foo.swift -o foo.o -emit-object</pre>You can see it requires the Swift standard libraries:<pre class="highlight">$ otool -l foo.o | grep LC_LINKER_OPTION -A3 cmd LC_LINKER_OPTION cmdsize 24 count 1 string #1 -lswiftCore...</pre>For Swift this is especially useful since there are some underlying libraries like <code class="language-plaintext highlighter-rouge">libswiftSwiftOnoneSupport.dylib</code> that need to be linked, but should be treated as implementation details that Swift developers are never exposed to.In general, this is more than you'll ever need to know about auto linking. But there are some situations where you might want to force binaries to include <code class="language-plaintext highlighter-rouge">LC_LINKER_OPTION</code>s when they don't automatically.
For example, if your build system builds without <code class="language-plaintext highlighter-rouge">-fmodules</code> (like bazel and cmake by default) and for some reason you cannot enable it<a href="#fn:1">1</a>, or when you're distributing a library and don't want your consumers to have to worry about adding extra linker flags.There are 2 different ways you can explicitly…