Every program launches with at least one thread where its work takes place, called the main thread.

Super simple command-line apps for macOS might only ever have that one thread, and iOS apps will have many more to do all sorts of other jobs, but either way that initial thread – the one the app is first launched with – always exists for the lifetime of the app, and it's always called the main thread.

This is important, because all your user interface work must take place on that main thread. Not some work some of the time, but all work all the time – if you try to update your UI from any other thread in your program you might find nothing happens, you might find your app crashes, or pretty much anything in between.

This rule exists for all apps that run on iOS, macOS, tvOS, and watchOS, and even though it's simple you will – will – forget it at some point in the future. It's like an initiation rite, except it happens more often than I'd like to admit even after years of programming.

Although Swift lets us create threads whenever we want, doing so is uncommon because it creates a lot of complexity. Each thread you create needs to run somewhere, and if you accidentally end up creating 40 threads when you have only 4 CPU cores, the system will need to spend a lot of time just swapping between them.

Swapping threads is known as a context switch, and it has a performance cost: the system must stash away all the data the thread was using and remember how far it had progressed in its work before giving another thread the chance to run. When this happens a lot – when you create many more threads than you have CPU cores – the cost of context switching grows high, and so it has a suitably disastrous-sounding name: thread explosion.

And so, apart from that main thread of work that starts our whole program and manages the user interface, we normally prefer to think of our work in terms of queues.

Queues work like they do in real life, where you might line up to buy something at a grocery store: you join the queue at the back, then move forward again and again until you're at the front, at which point you can check out.

Some bigger stores might have lots of queues leading up to lots of checkouts, and small stores might just have one queue with one checkout. You might occasionally see stores trying to avoid the problem of one queue moving faster than another by having a single shared queue feed into multiple checkouts – there are all sorts of possible combinations.

Swift's queues work exactly the same way: we create a queue and add work to it, and the system will remove and execute work from it in the order it was added. Sometimes the queues are serial, which means they remove one piece of work from the front of the queue and complete it before going on to the next piece of work, and sometimes they are concurrent, which means they remove and execute multiple pieces of work at a time. Either way, work will start in the order it was added to the queue unless we specifically say something has a high or low priority.

You might look at that and wonder why you even need serial queues – surely running one thing at a time is what we're trying to avoid? Well, no. In fact, there are lots of times when having the predictability of a serial queue is important.

As a simple example, your user might want to batch convert a collection of videos from one format to another.
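To make the main-thread rule concrete, here's one common defensive pattern using Grand Central Dispatch: check which thread you're on, and hop back to the main queue if necessary. This is a minimal sketch – `updateLabel` and its printed message are hypothetical stand-ins for real UI work:

```swift
import Foundation

// Hypothetical stand-in for real UI work, such as setting a label's text.
func updateLabel(_ text: String) {
    if Thread.isMainThread {
        // Already on the main thread – safe to touch the UI directly.
        print("Updating UI: \(text)")
    } else {
        // On a background thread – hop back to the main queue first.
        DispatchQueue.main.async {
            print("Updating UI: \(text)")
        }
    }
}

updateLabel("Ready")
```

In a real app you'd usually skip the check and simply dispatch UI updates to `DispatchQueue.main` from wherever your background work finishes.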
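In code, those serial and concurrent queues map onto Grand Central Dispatch's `DispatchQueue`. A minimal sketch (the label strings are arbitrary) showing that a serial queue runs its work in the order it was added:

```swift
import Dispatch

// A dispatch queue is serial by default: one work item at a time, FIFO.
let serialQueue = DispatchQueue(label: "example.serial")

// Adding the .concurrent attribute lets items run simultaneously.
let concurrentQueue = DispatchQueue(label: "example.concurrent",
                                    attributes: .concurrent)

var order: [Int] = []
for i in 1...3 {
    serialQueue.async { order.append(i) }
}

// sync runs after everything already queued, so all appends are done here.
serialQueue.sync { }
print(order)   // [1, 2, 3] – a serial queue preserves submission order
```

Run the same loop on the concurrent queue and the three items could execute at once, so the final order would no longer be guaranteed.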
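That batch-conversion scenario might be sketched like this, with a hypothetical `convert` function standing in for the real, slow transcoding work – the point is that one serial queue guarantees the videos finish in the order the user queued them:

```swift
import Dispatch

// Hypothetical converter – a stand-in for real video-conversion work.
func convert(_ name: String) -> String { name + ".mp4" }

// A single serial queue processes one video at a time, in order.
let conversionQueue = DispatchQueue(label: "example.conversion")
var finished: [String] = []

for video in ["holiday", "birthday", "graduation"] {
    conversionQueue.async {
        finished.append(convert(video))
    }
}

conversionQueue.sync { }   // wait for the queued work to drain
print(finished)            // ["holiday.mp4", "birthday.mp4", "graduation.mp4"]
```

Because the queue is serial, the device is never trying to transcode several videos at once, and the user sees results appear in the order they expect.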