Asynchrony in C# 5.0



Asynchrony in C# 5, Part One

The designers of C# 2.0 realized that writing iterator logic was painful. So they added iterator blocks. That way the compiler could figure out how to build a state machine that could store the continuation - the what-comes-next state - somewhere, hidden behind the scenes, so that you don't have to write that code.

They also realized that writing little methods that make use of local variables was painful. So they added anonymous methods. That way the compiler could figure out how to hoist the locals to a closure class, so that you don't have to write that code.

The designers of C# 3.0 realized that writing code that sorts, filters, joins, groups and summarizes complex data sets was painful. So they added query comprehensions and all the rest of the LINQ features. That way the compiler could figure out how to do the right object model calls to build the query, the expression trees, and so on.

The designers of C# 4.0 realized that interoperating with modern and legacy dynamic object models was painful. So they added the dynamic type. That way the compiler could figure out how to generate the code at compile time that does the analysis in the Dynamic Language Runtime at runtime.

The designers of C# 5.0 realized that writing asynchronous code is painful, in so many ways. Asynchronous code is hard to reason about, and as we've seen, the transformation into a continuation is complex and leads to code replete with mechanisms that obscure the meaning of the code.

This shall not stand.

I am pleased to announce that there will be a C# 5.0 (*), and that in C# 5.0 you'll be able to take this synchronous code:

    void ArchiveDocuments(List<Url> urls)
    {
      for(int i = 0; i < urls.Count; ++i)
        Archive(Fetch(urls[i]));
    }

and, given reasonable implementations of the FetchAsync and ArchiveAsync methods, transform it into this code to achieve the goal of sharing wait times as described yesterday:

    async void ArchiveDocuments(List<Url> urls)
    {
      Task archive = null;
      for(int i = 0; i < urls.Count; ++i)
      {
        var document = await FetchAsync(urls[i]);
        if (archive != null)
          await archive;
        archive = ArchiveAsync(document);
      }
    }

Where is the state machine code, the lambdas, the continuations, the checks to see if the task is already complete? They're all still there. Let the compiler generate all that stuff for you, just like you let the compiler generate code for iterator blocks, closures, expression trees, query comprehensions and dynamic calls. The C# 5.0 code above is essentially a syntactic sugar for the code I presented yesterday. That's a pretty sweet sugar!

I want to be quite clear on this point: the action of the code above will be logically the same as the action of yesterday's code. Whenever a task is "awaited", the remainder of the current method is signed up as a continuation of the task, then control immediately returns to the caller. When the task completes, the continuation is invoked and the method starts up where it was before.
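To make that transformation concrete, here is a small hand-written sketch of my own (not from the post or the CTP) of what "sign up the remainder as a continuation" looks like for a single fetch-and-archive step, using the TPL's Task.ContinueWith. FetchAsync and ArchiveAsync are stand-in stubs for the methods assumed above; only the shape of the continuation matters here.

    using System;
    using System.Threading.Tasks;

    static class ContinuationSketch
    {
        // Stand-ins for the post's FetchAsync and ArchiveAsync (assumed, simplified).
        static Task<string> FetchAsync(string url) => Task.FromResult("contents of " + url);
        static Task ArchiveAsync(string document) { Console.WriteLine("archiving " + document); return Task.CompletedTask; }

        // Hand-written continuation passing: the lambda is "the rest of the method".
        static void ArchiveOneDocument(string url)
        {
            Task<string> fetch = FetchAsync(url);    // start the asynchronous fetch
            fetch.ContinueWith(completed =>          // sign up the remainder as a continuation
            {
                string document = completed.Result;  // the fetch has finished; read its value
                ArchiveAsync(document);
            });
            // control returns to the caller here, long before the fetch completes
        }

        // The same logic with await; the compiler builds the continuation above for you.
        static async Task ArchiveOneDocumentAsync(string url)
        {
            string document = await FetchAsync(url);
            await ArchiveAsync(document);
        }
    }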

If I've timed the posting of this article correctly then Anders is announcing this new language feature at the PDC right about now. You can watch it here.

We are as of right now making a Community Technology Preview edition of the prototype C# 5.0 compiler available. The prototype compiler will be of prototype quality; expect features to be rough around the edges still. The idea is to give you a chance to try it and see what you think so that we can get early feedback.

I'll be posting more about this feature tomorrow and for quite some time after that; if you can't wait, or want to get your hands on the prototype, you can get lots more information and fresh tasty compiler bits from msdn.com/vstudio/async.

And of course, in keeping with our co-evolution strategy, there will be a Visual Basic 11 and it will also feature task-based asynchrony. Check out the VB team blog for details, or read all about this feature at my colleague Lucian's blog. (Lucian did much of the design and prototyping of this feature for both C# and VB; he, not I, is the expert on this, so if you have deep questions, you might want to ask him, not me.)

Tomorrow: await? async? Task? What about AsyncThingy? Tell me more!

(*) We are absolutely positively not announcing any dates or ship vehicles at this time, so don't even ask. Even if I knew, which I don't, and even if my knowledge had the faintest chance of being accurate, which it doesn't, I still wouldn't tell you.


Asynchronous Programming in C# 5.0, Part Two: Whence await?

I want to start by being absolutely positively clear about two things, because our usability research has shown these to be confusing. Remember our little program from last time?

    async void ArchiveDocuments(List<Url> urls)
    {
      Task archive = null;
      for(int i = 0; i < urls.Count; ++i)
      {
        var document = await FetchAsync(urls[i]);
        if (archive != null)
          await archive;
        archive = ArchiveAsync(document);
      }
    }

The two things are:

1) The async modifier on the method does not mean "this method is automatically scheduled to run on a worker thread asynchronously". It means the opposite of that; it means "this method contains control flow that involves awaiting asynchronous operations and will therefore be rewritten by the compiler into continuation passing style to ensure that the asynchronous operations can resume this method at the right spot." The whole point of async methods is that you stay on the current thread as much as possible. They're like coroutines: async methods bring single-threaded cooperative multitasking to C#. (At a later date I'll discuss the reasons behind requiring the async modifier rather than inferring it.)

2) The await operator used twice in that method does not mean "this method now blocks the current thread until the asynchronous operation returns". That would be making the asynchronous operation back into a synchronous operation, which is precisely what we are attempting to avoid. Rather, it means the opposite of that; it means "if the task we are awaiting has not yet completed then sign up the rest of this method as the continuation of that task, and then return to your caller immediately; the task will invoke the continuation when it completes."
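If it helps to see the distinction in running code, here is a sketch of mine (using the shipped .NET API rather than the CTP's BeginAwait/EndAwait surface) contrasting the two behaviours: blocking the current thread versus awaiting and returning to the caller.

    using System;
    using System.Threading.Tasks;

    static class WaitingSketch
    {
        // Blocking: the current thread is stuck inside this method for the whole delay.
        static void BlockingWait()
        {
            Task work = Task.Delay(1000);
            work.Wait();                    // the thread can do nothing else here
            Console.WriteLine("blocking wait finished");
        }

        // Awaiting: control returns to the caller at the await; the line after it
        // runs later, as a continuation, when the delay completes.
        static async Task AwaitingWait()
        {
            await Task.Delay(1000);         // caller gets control back immediately
            Console.WriteLine("awaited wait finished");
        }
    }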

It is unfortunate that people's intuition upon first exposure regarding what the "async" and "await" contextual keywords mean is frequently the opposite of their actual meanings. Many attempts to come up with better keywords failed to find anything better. If you have ideas for a keyword or combination of keywords that is short, snappy, and gets across the correct ideas, I am happy to hear them. Some ideas that we already had and rejected for various reasons were:

    wait for FetchAsync()
    yield with FetchAsync()
    yield FetchAsync()
    while away the time FetchAsync()
    hearken unto FetchAsync()
    for sooth Romeo wherefore art thou FetchAsync()

Moving on. We've got a lot of ground to cover. The next thing I want to talk about is: what exactly are those "thingies" that I handwaved about last time?

Last time I implied that the C# 5.0 expression

    document = await FetchAsync(urls[i])

gets realized as:

    state = State.AfterFetch;
    fetchThingy = FetchAsync(urls[i]);
    if (fetchThingy.SetContinuation(archiveDocuments))
      return;
    AfterFetch: ;
    document = fetchThingy.GetValue();

What's the thingy?

In our model for asynchrony an asynchronous method typically returns a Task<T>; let's assume for now that FetchAsync returns a Task<Document>. (Again, I'll discuss the reasons behind this "Task-based Asynchrony Pattern" at a later date.) The actual code will be realized as:

    fetchAwaiter = FetchAsync(urls[i]).GetAwaiter();
    state = State.AfterFetch;
    if (fetchAwaiter.BeginAwait(archiveDocuments))
      return;
    AfterFetch: ;
    document = fetchAwaiter.EndAwait();

The call to FetchAsync creates and returns a Task<Document> - that is, an object which represents a hot running task. Calling this method immediately returns a Task<Document> which is then somehow asynchronously fetching the desired document. Perhaps it runs on another thread, or perhaps it posts itself to some Windows message queue on this thread that some message loop is polling for information about work that needs to be done in idle time, or whatever. That's its business. What we know is that we need something to happen when it completes. (Again, I'll discuss single-threaded asynchrony at a later date.)

To make something happen when it completes, we ask the task for an Awaiter, which exposes two methods. BeginAwait signs up a continuation for this task; when the task completes, a miracle happens: somehow the continuation gets called. (Again, how exactly this is orchestrated is a subject for another day.) If BeginAwait returns true then the continuation will be called; if not, then that's because the task has already completed and there is no need to use the continuation mechanism.

EndAwait extracts the result that was the result of the completed task.

We will provide implementations of BeginAwait and EndAwait on Task (for tasks that are logically void returning) and Task<T> (for tasks that return a value). But what about asynchronous methods that do not return a Task or Task<T>? Here we can take the same approach as LINQ: when you write customers.Where(c=>c.City=="London"), overload resolution tries to find the best possible Where method by checking to see if customers implements such a method, or, if not, by going to extension methods. The GetAwaiter / BeginAwait / EndAwait pattern will be the same; we'll just do overload resolution on the transformed expression and see what it comes up with. If we need to go to extension methods, we will.
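As a concrete illustration, here is a hand-rolled sketch of my own (not CTP code) of the shape such an awaiter takes and the calls the generated state machine would make around an await; the shipped C# 5 later renamed these members to IsCompleted, OnCompleted and GetResult, but the idea is the same.

    using System;
    using System.Threading.Tasks;

    // A sketch of the CTP-era awaiter shape, built on top of a Task<T>.
    sealed class SketchAwaiter<T>
    {
        private readonly Task<T> task;
        public SketchAwaiter(Task<T> task) { this.task = task; }

        // Sign up a continuation; return false if the task has already completed,
        // in which case the caller just keeps running synchronously.
        public bool BeginAwait(Action continuation)
        {
            if (task.IsCompleted) return false;
            task.ContinueWith(_ => continuation());
            return true;
        }

        // Extract the completed task's result (rethrows if the task failed).
        public T EndAwait() => task.Result;
    }

    static class AwaiterSketchUsage
    {
        // The kind of calls the compiler-generated code makes at an await site.
        static void Step(Task<string> fetch, Action resumeHere)
        {
            var awaiter = new SketchAwaiter<string>(fetch);
            if (awaiter.BeginAwait(resumeHere))
                return;                               // not done yet; resumeHere runs later
            Console.WriteLine(awaiter.EndAwait());    // already done; continue right away
        }
    }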

Finally: why "Task"?

The insight here is that asynchrony does not require parallelism, but parallelism does require asynchrony, and many of the tools useful for parallelism can be used just as easily for non-parallel asynchrony. There is no inherent parallelism in Task; that the Task Parallel Library uses a task-based pattern to represent units of pending work that can be parallelized does not require multithreading.


As I've pointed out a few times, from the point of view of the code that is waiting for a result it really doesn't matter whether that result is being computed in idle time on this thread, in a worker thread in this process, in another process on this machine, on a storage device, or on a machine halfway around the world. What matters is that it's going to take time to compute the result, and this CPU could be doing something else while it is waiting, if only we let it.

The Task class from the TPL already has a lot of investment in it; it's got a cancellation mechanism and other useful features. Rather than invent some new thing, like some new "IFuture" type, we can just extend the existing task-based code to meet our asynchrony needs.
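One way to see that there is no inherent thread in a task is to manufacture one by hand with the TPL's TaskCompletionSource; this sketch (mine, not from the post, with hypothetical names) produces a task that completes only when some outside event, a UI callback or an I/O completion, says so, and no thread ever runs "inside" it.

    using System.Threading.Tasks;

    // A future with no thread behind it: whoever calls Supply() completes it.
    sealed class ManualFuture
    {
        private readonly TaskCompletionSource<string> source =
            new TaskCompletionSource<string>();

        // Hand this task to consumers; they can await it or add continuations.
        public Task<string> Task => source.Task;

        // Call this from a UI event, an I/O callback, a message handler, wherever.
        public void Supply(string value) => source.SetResult(value);
    }

Whether Supply happens on this thread or another is entirely up to the caller; the task itself is just a bookkeeping object.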

Next time: How to further compose asynchronous tasks.


Asynchrony in C# 5, Part Three: Composition

I was walking to my bus the other morning at about 6:45 AM. Just as I was about to turn onto 45th street, a young man, shirtless, covered in blood, ran down 45th at considerable speed right in front of me. Behind him was another fellow, wielding a baseball bat. My initial thought was "holy goodness, I have to call the police right now!"

Then I saw that the guy with the baseball bat was himself being chased by Count Dracula, a small horde of zombies, a band of pirates, one medieval knight, and bringing up the rear, a giant bumblebee. Apparently some Wallingford jogging club got into the Hallowe'en spirit this past weekend.

In unrelated news: we've had lots of great comments and feedback on the async feature so far; please keep it coming. It is being read; it will take weeks to digest it, and the volume of correspondence may preclude personal replies, for which I apologize, but that's how it goes.

Today I want to talk a bit about composition of asynchronous code, why it is difficult in CPS, and how it is much easier with "await" in C# 5.

I need a name for this thing. We named LINQ because it was Language Integrated Query. For now, let's provisionally call this new feature TAP, the Task Asynchrony Pattern. I'm sure we'll come up with a better name later; remember, this is still just a prototype. (*)

The example I've been using so far, of fetching and archiving documents, was obviously deliberately contrived to be a simple orchestration of two asynchronous tasks in a void returning method. As we saw, when using regular Continuation Passing Style it can be tricky to orchestrate even two asynchronous methods. Today I want to talk a bit about composition of asynchronous methods. Our ArchiveDocuments method was void returning, which simplifies things greatly. Suppose that ArchiveDocuments was to return a value, say, the total number of bytes archived. Synchronously, that's straightforward:

    long ArchiveDocuments(List<Url> urls)
    {
      long count = 0;
      for(int i = 0; i < urls.Count; ++i)
      {
        var document = Fetch(urls[i]);
        count += document.Length;
        Archive(document);
      }
      return count;
    }

Now consider how we'd rewrite that asynchronously. If ArchiveDocuments is going to return immediately when the first FetchAsync starts up, and is only going to be resumed when the first fetch completes, then when is the "return count" going to be executed? The naive asynchronous version of ArchiveDocuments cannot return a count; it has to be written in CPS too:

    void ArchiveDocumentsAsync(List<Url> urls, Action<long> continuation)
    {
      // somehow do the archiving asynchronously,
      // then call the continuation
    }

And now the taint has spread. Now the caller of ArchiveDocumentsAsync needs to be written in CPS, so that its continuation can be passed in. What if it in turn returns a result? This is going to become a mess; soon the entire program will be written upside down and inside out.


In the TAP model, instead we'd say that the type that represents asynchronous work that produces a result later is a Task<T>. In C# 5 you can simply say:

    async Task<long> ArchiveDocumentsAsync(List<Url> urls)
    {
      long count = 0;
      Task archive = null;
      for(int i = 0; i < urls.Count; ++i)
      {
        var document = await FetchAsync(urls[i]);
        count += document.Length;
        if (archive != null)
          await archive;
        archive = ArchiveAsync(document);
      }
      return count;
    }

and the compiler will take care of all the rewrites for you. It is instructive to understand exactly what happens here. This will get expanded into something like:

    Task<long> ArchiveDocuments(List<Url> urls)
    {
      var taskBuilder = AsyncMethodBuilder<long>.Create();
      State state = State.Start;
      TaskAwaiter<Document> fetchAwaiter = null;
      TaskAwaiter archiveAwaiter = null;
      int i;
      long count = 0;
      Task archive = null;
      Document document;
      Action archiveDocuments = () =>
      {
        switch(state)
        {
          case State.Start: goto Start;
          case State.AfterFetch: goto AfterFetch;
          case State.AfterArchive: goto AfterArchive;
        }
        Start:
        for(i = 0; i < urls.Count; ++i)
        {
          fetchAwaiter = FetchAsync(urls[i]).GetAwaiter();
          state = State.AfterFetch;
          if (fetchAwaiter.BeginAwait(archiveDocuments))
            return;
          AfterFetch:
          document = fetchAwaiter.EndAwait();
          count += document.Length;
          if (archive != null)
          {
            archiveAwaiter = archive.GetAwaiter();
            state = State.AfterArchive;
            if (archiveAwaiter.BeginAwait(archiveDocuments))
              return;
            AfterArchive:
            archiveAwaiter.EndAwait();
          }
          archive = ArchiveAsync(document);
        }
        taskBuilder.SetResult(count);
        return;
      };
      archiveDocuments();
      return taskBuilder.Task;
    }


(Note that we still have problems with the labels being out of scope. Remember, the compiler doesn't need to follow the rules of C# source code when it is generating code on your behalf; pretend the labels are in scope at the point of the goto. And note that we still have no exception handling in here. As I discussed last week in my post on building exception handling in CPS, exceptions get a little bit weird because there are *two* continuations: the normal continuation and the error continuation. How do we deal with that situation? I'll discuss how exception handling works in TAP at a later date.)

Let me make sure the control flow is clear here. Let's first consider the trivial case: the list is empty. What happens? We create a task builder. We create a void-returning delegate. We invoke the delegate synchronously. It initializes the outer variable "count" to zero, branches to the "Start" label, skips the loop, tells the helper "you have a result", and returns. The delegate is now done. The task builder is asked for a task; it knows that the task's work is completed, so it returns a completed task that simply represents the number zero.

If the caller attempts to await that task then its awaiter will return false when asked to begin an async operation, because the task has completed. If the caller does not await that task then... well, then they do whatever they do with a Task<long>. Eventually they can ask it for its result, or ignore it if they don't care.

Now let's consider the non-trivial case; there are multiple documents to archive. Again, we create a task builder and a delegate which is invoked synchronously. First time through the loop, we begin an asynchronous fetch, sign up the delegate as its continuation, and return from the delegate. At that point the task builder builds a task that represents "I'm asynchronously working on the body of ArchiveDocumentsAsync" and returns that task. When the fetch task completes asynchronously and invokes its continuation, the delegate starts up again "from the point where it left off", thanks to the magic of the state machine. Everything proceeds exactly as before, in the case of the void returning version; the only difference is that the returned Task<long> for ArchiveDocumentsAsync signals that it is complete (by invoking its continuation) when the delegate tells the task builder to set the result.

Make sense?

Before I continue with some additional thoughts on composition of tasks, a quick note on the extensibility of TAP. We designed LINQ to be very extensible; any type that implements Select, Where, and so on, or has extension methods implemented for them, can be used in query comprehensions. Similarly with TAP: any type that has a GetAwaiter that returns a type that has BeginAwait, EndAwait, and so on, can be used in "await" expressions. However, methods marked as being async can only return void, Task, or Task<T> for some T. We are all about enabling extensibility on consumption of existing asynchronous things, but have no desire to get in the business of enabling production of asynchronous methods with exotic types. (The alert reader will have noted that I have not discussed extensibility points for the task builder. At a later date I'll discuss where the task builder comes from.)
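For comparison, here is what that extensibility ended up looking like in the shipped C# 5 pattern (a sketch of mine; the CTP's BeginAwait/EndAwait names did not survive, but extension-method lookup of GetAwaiter did): an extension GetAwaiter is enough to make a TimeSpan awaitable.

    using System;
    using System.Runtime.CompilerServices;
    using System.Threading.Tasks;

    static class TimeSpanAwaiterExtensions
    {
        // Any type with a suitable GetAwaiter, instance or extension, becomes awaitable.
        public static TaskAwaiter GetAwaiter(this TimeSpan delay) =>
            Task.Delay(delay).GetAwaiter();
    }

    static class TimeSpanAwaitUsage
    {
        static async Task PauseBriefly()
        {
            await TimeSpan.FromMilliseconds(250);   // resolved through the extension above
        }
    }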

Continuing on: (ha ha ha)

In LINQ there are some situations in which the use of "language" features like "where" clauses is more natural and some where using "fluent" syntax ("Where(c=>...)") is more natural. Similarly with TAP: our goal is to enable use of regular C# syntax to compose and orchestrate asynchronous tasks, but sometimes you want to have a more "combinator" based approach. To that end, we'll be making available methods with names like "WhenAll" or "WhenAny" that compose tasks like this:

    List<List<Url>> groupsOfUrls = whatever;
    Task<long[]> allResults = Task.WhenAll(from urls in groupsOfUrls select ArchiveDocumentsAsync(urls));
    long[] results = await allResults;


What does this do? Well, ArchiveDocumentsAsync returns a Task<long>, so the query returns an IEnumerable<Task<long>>. WhenAll takes a sequence of tasks and produces a new task which asynchronously awaits each of them, fills the results into an array, and then invokes its continuation with the results when available.

Similarly, we'll have a WhenAny that takes a sequence of tasks and produces a new task that invokes its continuation with the first result when any of those tasks complete. (An interesting question is what happens if the first one completes successfully and the rest all throw an exception, but we'll talk about that later.)
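To show that these combinators are not magic either, here is a rough sketch of mine of how a WhenAll-like helper can be assembled from the primitives already discussed, a completion source plus one continuation per input task; the real combinator is considerably more careful about faulted, cancelled and empty inputs.

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Threading;
    using System.Threading.Tasks;

    static class CombinatorSketch
    {
        // Simplified WhenAll: completes with all results once every input task has
        // completed successfully. (No handling of cancellation or an empty sequence.)
        public static Task<T[]> WhenAllSketch<T>(IEnumerable<Task<T>> tasks)
        {
            var all = tasks.ToArray();
            var results = new T[all.Length];
            int remaining = all.Length;
            var source = new TaskCompletionSource<T[]>();

            for (int i = 0; i < all.Length; ++i)
            {
                int index = i;   // capture a copy of the loop variable for the continuation
                all[index].ContinueWith(completed =>
                {
                    if (completed.IsFaulted)
                    {
                        source.TrySetException(completed.Exception.InnerExceptions);
                    }
                    else
                    {
                        results[index] = completed.Result;
                        if (Interlocked.Decrement(ref remaining) == 0)
                            source.TrySetResult(results);
                    }
                });
            }
            return source.Task;
        }
    }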

There will be other task combinators and related helper methods; see the CTP samples for some examples. Note that in the CTP release we were unable to modify the existing Task class; instead we've provisionally added the new combinators to TaskEx. In the final release they will almost certainly be moved onto Task.

Next time: No, seriously, asynchrony does not necessarily involve multithreading.

(*) I emphasize that this is provisional and for my own rhetorical purposes, and not an official name of anything. Please don't publish a book called "Essential TAP" or "Learn TAP in 21 Time Units" or whatever. I have a copy of "Instant DHTML Scriptlets" on my bookshelf; Dino Esposito writes so fast that he published an entire book between the time I mentioned the code name of the product to him and we announced the real name. ("Windows Script Components" was the final name.)


Asynchrony in C# 5.0, Part Four: It's not magic

Today I want to talk about asynchrony that does not involve any multithreading whatsoever.

People keep on asking me "but how is it possible to have asynchrony without multithreading?" A strange question to ask, because you probably already know the answer. Let me turn the question around: how is it possible to have multitasking without multiple CPUs? You can't do two things "at the same time" if there's only one thing doing the work! But you already know the answer to that: multitasking on a single core simply means that the operating system stops one task, saves its continuation somewhere, switches to another task, runs it for a while, saves its continuation, and eventually switches back to continue the first task. Concurrency is an illusion in a single-core system; it is not the case that two things are really happening at the same time. How is it possible for one waiter to serve two tables "at the same time"? It isn't: the tables take turns being served. A skillful waiter makes each guest feel like their needs are met immediately by scheduling the tasks so that no one has to wait.

Asynchrony without multithreading is the same idea. You do a task for a while, and when it yields control, you do another task for a while on that thread. You hope that no one ever has to wait unacceptably long to be served.

Remember a while back I briefly sketched how early versions of Windows implemented multiple processes? Back in the day there was only one thread of control; each process ran for a while and then yielded control back to the operating system. The operating system would then loop around the various processes, giving each one a chance to run. If one of them decided to hog the processor, then the others became non-responsive. It was an entirely cooperative venture.

So let's talk about multi-threading for a bit. Remember a while back, in 2003, I talked a bit about the apartment threading model? The idea here is that writing thread-safe code is expensive and difficult; if you don't have to take on that expense, then don't. If we can guarantee that only "the UI thread" will call a particular control then that control does not have to be safe for use on multiple threads. Most UI components are apartment threaded, and therefore the UI thread acts like Windows 3: everyone has to cooperate, otherwise the UI stops updating.

A surprising number of people have magical beliefs about how exactly applications respond to user inputs in Windows. I assure you that it is not magic. The way that interactive user interfaces are built in Windows is quite straightforward. When something happens, say, a mouse click on a button, the operating system makes a note of it. At some point, the process asks the operating system "did anything interesting happen recently?" and the operating system says "why yes, someone clicked this thing." The process then does whatever action is appropriate for that. What happens is up to the process; it can choose to ignore the click, handle it in its own special way, or tell the operating system "go ahead and do whatever the default is for that kind of event." All this is typically driven by some of the simplest code you'll ever see:

    while(GetMessage(&msg, NULL, 0, 0) > 0)
    {
      TranslateMessage(&msg);
      DispatchMessage(&msg);
    }

That's it. Somewhere in the heart of every process that has a UI thread is a loop that looks remarkably like this one. One call gets the next message. That message might be at too low a level for you; for example, it might say that a key with a particular keyboard code number was pressed. You might want that translated into "the numlock key was pressed". TranslateMessage does that. There might be some more specific procedure that deals with this message. DispatchMessage passes the message along to the appropriate procedure.


I want to emphasize that this is not magic. It's a while loop. It runs like any other while loop in C that you've ever seen. The loop repeatedly calls three methods, each of which reads or writes a buffer and takes some action before returning. If one of those methods takes a long time to return (typically DispatchMessage is the long-running one, of course, since it is the one actually doing the work associated with the message) then guess what? The UI doesn't fetch, translate or dispatch notifications from the operating system until such a time as it does return. (Or, unless some other method on the call chain is pumping the message queue, as Raymond points out in the linked article. We'll return to this point below.)

Let's take an even simpler version of our document archiving code from last time:

    void FrobAll()
    {
      for(int i = 0; i < 100; ++i)
        Frob(i);
    }

Suppose you're running this code as the result of a button click, and a "someone is trying to resize the window" message arrives in the operating system during the first call to Frob. What happens? Nothing, that's what. The message stays in the queue until everything returns control back to that message loop. The message loop isn't running; how could it be? It's just a while loop, and the thread that contains that code is busy Frobbing. The window does not resize until all 100 Frobs are done.

Now suppose you have

    async void FrobAll()
    {
      for(int i = 0; i < 100; ++i)
      {
        await FrobAsync(i); // somehow get a started task for doing a Frob(i) operation on this thread
      }
    }

What happens now?

Someone clicks a button. The message for the click is queued up. The message loop dispatches the message and ultimately calls FrobAll.

FrobAll creates a new task with an action.

The task code sends a message to its own thread saying "hey, when you have a minute, call me". It then returns control to FrobAll.

FrobAll creates an awaiter for the task and signs up a continuation for the task.

Control then returns back to the message loop. The message loop sees that there is a message waiting for it: please call me back. So the message loop dispatches the message, and the task starts up the action. It does the first call to Frob.

Now, suppose another message, say, a resize event, occurs at this point. What happens? Nothing. The message loop isn't running. We're busy Frobbing. The message goes in the queue, unprocessed.

The first Frob completes and control returns to the task. It marks itself as completed and sends another message to the message queue: "when you have a minute, please call my continuation". (*)


The task call is done. Control returns to the message loop. It sees that there is a pending window resize message. That is then dispatched.

You see how async makes the UI more responsive without having any threads? Now you only have to wait for one Frob to finish, not for all of them to finish, before the UI responds.

That might still not be good enough, of course. It might be the case that every Frob takes too long. To solve that problem, you could make each call to Frob itself spawn short asynchronous tasks, so that there would be more opportunities for the message loop to run. Or, you really could start the task up on a new thread. (The tricky bit then becomes posting the message to run the continuation of the task to the right message loop on the right thread; that's an advanced topic that I won't cover today.)

Anyway, the message loop dispatches the resize event and then checks its queue again, and sees that it has been asked to call the continuation of the first task. It does so; control branches into the middle of FrobAll and we pick up going around the loop again. The second time through, again we create a new task... and the cycle continues.

The thing I want to emphasize here is that we stayed on one thread the whole time. All we're doing here is breaking the work into little pieces and sticking the work onto a queue; each piece of work sticks the next piece of work onto the queue. We rely on the fact that there's a message loop somewhere taking work off that queue and performing it.
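Stripped of the Windows specifics, the whole arrangement is just this (a toy sketch of mine, not real message-loop code): a queue of delegates, one loop draining it, and pieces of work that post their own continuations back onto the queue.

    using System;
    using System.Collections.Generic;

    static class ToyMessageLoop
    {
        static readonly Queue<Action> queue = new Queue<Action>();

        static void Post(Action work) => queue.Enqueue(work);

        // "FrobAll" broken into pieces: each piece does one Frob and then posts the
        // next piece, so other queued messages get a turn in between.
        static void FrobFrom(int i)
        {
            if (i >= 3) return;
            Console.WriteLine($"Frob({i})");
            Post(() => FrobFrom(i + 1));     // the continuation of this piece of work
        }

        static void Main()
        {
            Post(() => FrobFrom(0));
            Post(() => Console.WriteLine("resize message, handled between Frobs"));

            // The "message loop": one thread, pulling work items off the queue in order.
            while (queue.Count > 0)
                queue.Dequeue()();
        }
    }

Running it prints Frob(0), the resize message, then Frob(1) and Frob(2): exactly the interleaving described above, on a single thread.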

UPDATE: A number of people have asked me "so does this mean that the Task Asynchrony Pattern only works on UI threads that have message loops?" No. The Task Parallel Library was explicitly designed to solve problems involving concurrency; task asynchrony extends that work. There are mechanisms that allow asynchrony to work in multithreaded environments without message loops that drive user interfaces, like ASP.NET. The intention of this article was to describe how asynchrony works on a UI thread without multithreading, not to say that asynchrony only works on a UI thread without multithreading. I'll talk at a later date about server scenarios where other kinds of "orchestration" code work out which tasks run when.

Extra bonus topic: Old hands at VB know that in order to get UI responsiveness you can use this trick:

    Sub FrobAll()
      For i = 0 To 99
        Call Frob(i)
        DoEvents
      Next
    End Sub

Does that do the same thing as the C# 5 async program above? Did VB6 actually support continuation passing style?

No; this is a much simpler trick. DoEvents does not transfer control back to the original message loop with some sort of "resume here" message like the task awaiting does. Rather, it starts up a second message loop (which, remember, is just a perfectly normal while loop), clears out the backlog of pending messages, and then returns control back to FrobAll. Do you see why this is potentially dangerous?

What if we are in FrobAll as a result of a button click? And what if, while frobbing, the user pressed the button again? DoEvents runs another message loop, which clears out the message queue, and now we are running FrobAll within FrobAll; it has become reentrant. And of course, it can happen again, and now we're running a third instance of FrobAll...

Of course, the same is true of task based asynchrony! If you start asynchronously frobbing due to a button click, and there is a second button click while more frobbing work is pending, then you get a second set of tasks created. To prevent this it is probably a good idea to make FrobAll return a Task, and then do something like:

    async void Button_OnClick(whatever)
    {
      button.Disable();
      await FrobAll();
      button.Enable();
    }

so that the button cannot be clicked again while the asynchronous work is still pending.

Next time: Asynchrony is awesome but not a panacea: a real life story about how things can go terribly wrong.

    -----------

(*) Or, it invokes the continuation right then and there. Whether the continuation is invoked aggressively or is itself simply posted back as more work for the thread to do is user-configurable, but that is an advanced topic that I might not get to.


This sounds like a silly story, but we accidentally did something like this to ourselves just the other day. We built an analyzer that uses task based asynchrony on the UI thread. The idea was that on every keystroke we would start an asynchronous task that then itself started other asynchronous tasks. The tasks would be:

1) Check to see if there's been a subsequent keystroke recently; if so, then cancel any unfinished tasks associated with this keystroke.

2) Recolour the user interface text based on the differential analysis of the change to the syntax tree induced by the keystroke.

3) Set a timer to see if there has been half a second of time with no keystrokes. If so, the user may have paused typing and we can kick off a background worker to perform deeper program analysis.

See the problem? The tasks are created so fast between keystrokes that if you're typing quickly soon you end up with a warehouse full of tens of thousands of tasks, 99.99% of which have not yet run in order to cancel themselves. And the ones that do manage to run are creating timers, the vast majority of which are going to be deleted before they tick. The garbage collection pressure from all those timers alone was enough to destroy performance. Asynchrony is awesome, but you need to make sure that you get the granularity of the asynchrony at the appropriate level. The code analyzer was rewritten to enqueue tasks that referred to a single, global timer, and to be less aggressive about enqueueing tasks that were highly likely to need cancellation a tenth of a second later. Performance improved drastically.

Task-based asynchrony is awesome but it is not a panacea; be careful and measure as you go. The fact that there is hardly any waiting between the time you make the request and the time the asynchronous request enqueues itself on the work queue means that there is hardly any restriction on the number of tasks you can create quickly.
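As an illustration of getting the granularity right (my sketch, with hypothetical names, not the analyzer's actual code), the usual fix is to coalesce a burst of keystrokes into one pending piece of work, cancelling and restarting a single delay instead of creating a timer per keystroke.

    using System;
    using System.Threading;
    using System.Threading.Tasks;

    sealed class KeystrokeDebouncer
    {
        private CancellationTokenSource pending;

        public void OnKeystroke(Action deepAnalysis)
        {
            pending?.Cancel();                        // abandon work queued for the previous keystroke
            pending = new CancellationTokenSource();
            CancellationToken token = pending.Token;

            Task.Delay(TimeSpan.FromMilliseconds(500), token)
                .ContinueWith(delay =>
                {
                    if (!delay.IsCanceled)            // half a second of quiet: the user paused typing
                        deepAnalysis();
                });
        }
    }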

Next time: More thoughts on syntactic concerns.


Asynchrony in C# 5, Part Six: Whither async?

A number of people have asked me what motivates the design decision to require any method that contains an "await" expression to be prefixed with the contextual keyword "async".

Like any design decision there are pros and cons here that have to be evaluated in the context of many different competing and incompossible principles. There's not going to be a slam-dunk solution here that meets every criterion or delights everyone. We're always looking for an attainable compromise, not for unattainable perfection. This design decision is a good example of that.

One of our key principles is "avoid breaking changes whenever reasonably possible". Ideally it would be nice if every program that used to work in C# 1, 2, 3 and 4 worked in C# 5 as well. (*) As I mentioned a few episodes back, (**) when adding a prefix operator there are many possible points of ambiguity and we want to eliminate all of them. We considered many heuristics that could make good guesses about whether a given "await" was intended as an identifier rather than a keyword, and did not like any of them.

The heuristics for "var" and "dynamic" were much easier because "var" is only special in a local variable declaration and "dynamic" is only special in a context in which a type is legal. "await" as a keyword is legal almost everywhere inside a method body that an expression or type is legal, which greatly increases the number of points at which a reasonable heuristic has to be designed, implemented and tested. The heuristics discussed were subtle and complicated. For example, var x = y + await; clearly should treat await as an identifier, but should var x = await + y do the same, or is that an await of the unary plus operator applied to y? var x = await t; should treat await as a keyword; should var x = await(t); do the same, or is that a call to a method called await?

Requiring "async" means that we can eliminate all backwards compatibility problems at once; any method that contains an await expression must be "new construction" code, not "old work" code, because "old work" code never had an async modifier.

An alternative approach that still avoids breaking changes is to use a two-word keyword for the await expression. That's what we did with "yield return". We considered many two-word patterns; my favourite was "wait for". We rejected options of the form "yield with", "yield wait" and so on because we felt that it would be too easily confused with the subtly different continuation behaviour of iterator blocks. We have effectively trained people that "yield" logically means "proffer up a value", rather than "cede flow of control back to the caller", though of course it means both! We rejected options containing "return" and "continue" because they are too easily confused with those forms of control flow. Options containing "while" are also problematic; beginner programmers occasionally ask whether a "while" loop is exited the moment that the condition becomes false, or if it keeps going until the bottom of the loop. You can see how similar confusions could arise from use of "while" in asynchrony.

Of course "await" is problematic as well. Essentially the problem here is that there are two kinds of waiting. If you're in a waiting room at the hospital then you might wait by falling asleep until the doctor is available. Or, you might wait by reading a magazine, balancing a chequebook, calling your mother, doing a crossword puzzle, or whatever. The point of task-based asynchrony is to embrace the latter model of waiting: you want to keep getting stuff done on this thread while you're waiting for your task to complete, rather than sleeping, so you wait by remembering what you were doing, and then go do something else while you're waiting. I am hoping that the user education problem of clarifying which kind of waiting we're talking about is not insurmountable.

Ultimately, whether it is "await" or not, the designers really wanted it to be a single-word feature. We anticipate that this feature will potentially be used numerous times in a single method. Many iterator blocks contain only one or two yield returns, but there could be dozens of awaits in code which orchestrates a complex asynchronous operation. Having a succinct operator is important.

Of course, you don't want it to be too succinct. F# uses "do!" and "let!" and so on for their asynchronous workflow operations. That! makes! the! code! look! exciting! but it is also a "secret code" that you have to know about to understand; it's not very discoverable. If you see "async" and "await" then at least you have some clue about what the keywords mean.

Another principle is "be consistent with other language features". We're being pulled in two directions here. On the one hand, you don't have to say "iterator" before a method which contains an iterator block. (If we had, then "yield return x;" could have been just "yield x;".) This seems inconsistent with iterator blocks. On the other hand... let's return to this point in a moment.

Another principle we consider is the "principle of least surprise". More specifically, that small changes should not have surprising nonlocal results. Consider the following:

    void Frob<X>(Func<X> f) { ... }
    ...
    Frob(()=> {
      if (whatever)
      {
        await something;
        return 123;
      }
      return 345;
    } );

It seems bizarre and confusing that commenting out the "await something;" changes the type inferred for X from Task<int> to int. We do not want to add return type annotations to lambdas. Therefore, we'll probably go with requiring "async" on lambdas that contain "await":

    Frob(async ()=> {
      if (whatever)
      {
        await something;
        return 123;
      }
      return 345;
    } );

Now the type inferred for X is Task<int> even if the await is commented out.

That is strong pressure towards requiring "async" on lambdas. Since we want language features to be consistent, and it seems inconsistent to require "async" on anonymous functions but not on nominal methods, that is indirect pressure towards requiring it on methods as well.

Another example of a small change causing a big difference:

    Task<object> Foo()
    {
      await blah;
      return null;
    }

If "async" is not required then this method with the "await" produces a non-null task whose result is set to null. If we comment out the "await" for testing purposes, say, then it produces a null task -- completely different. If we require "async" then the method returns the same thing both ways.


Another design principle is that the stuff that comes before the body of a declared entity such as a method is all stuff that is represented in the metadata of the entity. The name, return type, type parameters, formal parameters, attributes, accessibility, static/instance/virtual/override/abstract/sealed-ness, and so on, are all part of the metadata of the method. "async" and "partial" are not, which seems inconsistent. Put another way: "async" is solely about describing the implementation details of the method; it has no impact on how the method is used. The caller cares not a bit whether a given method is marked as "async" or not, so why put it right there in the code where the person writing the caller is likely to read it? This is a point against "async".

On the other hand, another important design principle is that interesting code should call attention to itself. Code is read a lot more than it is written. Async methods have a very different control flow than regular methods; it makes sense to call that out at the top where the code maintainer reads it immediately. Iterator blocks tend to be short; I don't think I've ever written an iterator block that does not fit on a page. It's pretty easy to glance at an iterator block and see the yield. One imagines that async methods could be long and the 'await' could be buried somewhere not immediately obvious. It's nice that you can see at a glance from the header that this method acts like a coroutine.

Another design principle that is important is "the language should be amenable to rich tools". Suppose we require "async". What errors might a user make? A user might have a method with the async modifier which contains no awaits, believing that it will run on another thread. Or the user might write a method that does have awaits but forget to give it the "async" modifier. In both cases we can write code analyzers that identify the problem and produce rich diagnostics that can teach the developer how to use the feature. A diagnostic could, for instance, remind you that an async method with no awaits does not run on another thread and give suggestions for how to achieve parallelism if that's really what you want. Or a diagnostic could tell you that an int-returning method containing an await should be refactored (automatically, perhaps!) into an async method that returns Task<int>. The diagnostic engine could also search for all the callers of this method and give advice on whether they in turn should be made async. If "async" is not required then we cannot easily detect or diagnose these sorts of problems.

That's a whole lot of pros and cons; after evaluating all of them, and lots of playing around with the prototype compiler to see how it felt, the C# designers settled on requiring "async" on a method that contains an "await". I think that's a reasonable choice.

Credits: Many thanks to my colleague Lucian for his insights and his excellent summary of the detailed design notes which were the basis of this episode.

Next time: I want to talk a bit about exceptions and then take a break from async/await for a while. A dozen posts on the same topic in just a few weeks is a lot.

(*) We have violated this principle on numerous occasions, both (1) by accident, and (2) deliberately, when the benefit was truly compelling and the rate of breakage was likely to be low. The famous example of the latter is F(G<A, B>(7)); in C# 1 that means that F has two arguments, both comparisons. In C# 2 that means F has one argument, and G is a generic method of arity two.

(**) When I wrote that article I knew that we would be adding "await" as a prefix operator. It was an easy article to write because we had recently gone through the process of noodling on the specification to find the possible points of ambiguity. Of course I could not use "await" as the example back in September because we did not want to telegraph the new C# 5 feature, so I picked "frob" as nicely meaningless.


Asynchrony in C# 5, Part Seven: Exceptions

Resuming where we left off (ha ha ha!) after that brief interruption: exception handling in "resumable" methods like our coroutine-like asynchronous methods is more than a little bit weird. To get a sense of how weird it is, you might want to first refresh your memory of my recent series on the design of iterator blocks, particularly the post about the difference between a "push" model and a "pull" model. Briefly though:

In a regular code block, a try block surrounding a normal "synchronous" call site observes any exceptions that occur within the call:

    try { Q(); }
    catch { ... }
    finally { ... }

If Q() throws an exception then the catch block runs; when control leaves Q() by regular or exceptional means, the finally block runs. Nothing unusual here.

Now consider an iterator block that has been rewritten into a MoveNext method of an enumerator. When that thing is called it is called synchronously. If it throws an exception then the exception is handled by the nearest try-protected region on the call stack. But suppose the iterator block itself has a try-protected region that yields control back to the caller:

    try { yield return whatever; }
    catch { ... }
    finally { ... }

The yield statement returns control back to the caller, but the return does not activate the finally block. And if an exception is thrown in the caller then the MoveNext() is no longer on the stack. Its exception handler has vanished. The exception model of iterator blocks is pretty weird. The finally block only runs when control is in the MoveNext() method and leaves the try-protected region by some mechanism other than yield return. Or when the enumerator is disposed early, which can happen if the caller throws an exception that activates a finally block that disposes the enumerator. In short: if the thing you've yielded control to has an exception that leaves the loop that is iterating the enumerator then the finally block of the iterator probably runs, but the catch block does not! Bizarre. That's why we made it illegal for you to yield in a try block that has a catch.

So what on earth are we going to do for methods with "awaits" in them? The situation is like the situation with iterator blocks, but even more bizarre because of course the asynchronous task can itself throw an exception:

    async Task M()
    {
      try { await DoSomethingAsync(); }
      catch { ... }
      finally { ... }
    }

We do want it to be legal to await something in a try block that has a catch. Suppose DoSomethingAsync throws before it returns a task. No problem there; M is still on the stack, so the catch block runs. Suppose DoSomethingAsync returns a task. M signs up the rest of itself as the continuation of the task, and immediately returns another task to its caller. What happens when the job associated with the task returned by DoSomethingAsync is scheduled to run, and it throws an exception? Logically we want M to still be "on the stack" so that its catch and finally run, just like it would if DoSomething had been a synchronous call. (Unlike iterator blocks: we want the catch to run, not just the finally!) But M is long gone; it has signed up a delegate that contains code that looks just like it as the continuation of a task, but M and its try block are vanished. The task might not even be running on the thread that M ran on. Heck, it might not even be running on the same continent, if the task is actually farmed out to some service provider "in the cloud". What do we do?

I said a few episodes back that exception handling in continuation passing style is easy; you just pass around two continuations, one for the exceptional situation and one for the regular situation. That's not actually what we do here. Instead what we do is: if the job throws an otherwise-uncaught exception then it is caught and the exception is stored in the task. The task is then signaled as having completed unsuccessfully. When the continuation of the task resumes, we do a "goto" into the middle of the try block (somehow) and check to see if the task blew up. If it did, then we can throw the exception right there, and hey, this time there is a try-catch-finally that can handle the exception.

But suppose we do not handle the exception; maybe the catch block doesn't match. What do we do then? M's original caller is, again, long gone; the continuation is probably being called by some top-level message pump somewhere. What do we do? Well, remember, M returned a task. We cache the exception again, in that task, and then signal that task as having completed unsuccessfully. Thus the buck is passed to the caller, which is of course what exception throwing is all about: making your caller do the work of cleaning up your mess.

In short, M() is generated as something like this pseudo-C#:

    Task M()
    {
      var builder = AsyncMethodBuilder.Create();
      var state = State.Begin;
      Action continuation = () =>
      {
        try
        {
          if (state == State.AfterDoSomething) goto AfterDoSomething;
          try
          {
            var awaiter = DoSomethingAsync().GetAwaiter();
            state = State.AfterDoSomething;
            if (awaiter.BeginAwait(continuation))
              return without running the finally;
            AfterDoSomething:
            awaiter.EndAwait(); // throws an exception if the task completed unsuccessfully
            builder.SetResult();
            return;
          }
          catch { ... }
          finally { ... }
        }
        catch (Exception exception)
        {
          builder.SetException(exception); // signal this task as having completed unsuccessfully
          return;
        }
        builder.SetResult();
      };
      continuation();
      return builder.Task;
    }

Of course there are problems here; you cannot do a goto into the middle of a try block, the label is out of scope, and so on. Ve have vays of making the compiler generate IL that works; it doesn't have to be legal C#. This is just a sketch.

If the EndAwait throws an exception cached from the asynchronous operation then the catch and finally blocks run normally. If the inner catch block doesn't handle it, or throws another exception, then the outer catch block gets it, caches it in the task, and signals the task as having completed abnormally.


I have ignored several important cases in this brief sketch. For example, what if the method M is void returning? In that situation there is no task constructed for M, and so there is nothing to be signalled as completed unsuccessfully, and nowhere to cache the exception. What if DoSomethingAsync does a WhenAll on ten sub-tasks and two of them throw an exception? What about the same scenario but with WhenAny?

Next time I'll talk a bit about these cases, muse about exception handling philosophy in general, and ask you whether that philosophy gives good guidance or not. Then we'll take a short break for American Thanksgiving, and then pick up with some topic other than asynchrony.


Asynchrony in C# 5, Part Eight: More Exceptions

(In this post I'll be talking about exogenous, vexing, boneheaded and fatal exceptions. See this post for a definition of those terms.)

If your process experiences an unhandled exception then clearly something bad and unanticipated has happened. If it's a fatal exception then you're already in no position to save the process; it is going down. You might as well leave it unhandled, or just log it and rethrow it. If it had been anticipated because it's a vexing or exogenous exception then there would be a handler in place for it. An unhandled vexing/exogenous exception is a bug, but probably one which does not actually indicate a logic problem in the program's algorithms; it's just an oversight.

But if you have an unhandled boneheaded exception then that is evidence that your program has a very serious bug indeed, a bug so bad that its operation cannot continue. The boneheaded exception should never have been thrown in the first place; you never handle them, you make for darn sure they cannot possibly happen. If a boneheaded exception is thrown then you have no idea whatsoever what locks were released early, what internal state is now corrupt or inconsistent, and so on. You can't do anything with confidence, and often the best thing to do in that case is to aggressively shut down the process before things get any worse.
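To put that advice in code form, here is a minimal synchronous sketch (File.ReadAllText is the real framework method; Log and DefaultConfig are made-up stand-ins): handle the exogenous failure you anticipated, validate away the boneheaded one, and let anything else crash:

string ReadConfig(string path)
{
    // Boneheaded: a null path is a bug in the caller. Validate it away;
    // never write a catch for ArgumentNullException downstream.
    if (path == null)
        throw new ArgumentNullException("path");

    try
    {
        // Exogenous: the file can be missing or locked no matter how careful
        // we are, so a handler here is legitimate and expected.
        return File.ReadAllText(path);
    }
    catch (IOException ex)
    {
        Log("Could not read config: " + ex.Message); // made-up logger
        return DefaultConfig;                        // made-up fallback
    }

    // Anything else -- NullReferenceException, OutOfMemoryException, ... --
    // deliberately goes unhandled and takes the process down.
}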

We cannot easily tell the difference between bugs which are missing handlers for vexing/exogenous exceptions, and which are bugs that have caused a program crash because something is broken in the implementation. The safest thing to do is to assume that every unhandled exception is either a fatal exception or an unhandled boneheaded exception. In both cases, the right thing to do is to take down the process immediately.

This philosophy underlies the implementation of unhandled exceptions in the CLR. Way back in the CLR v1.0 days the policy was that an unhandled exception on the "main" thread took down the process aggressively, but an unhandled exception on a "worker" thread simply killed the thread and left the main thread running. (And an exception on the finalizer thread was ignored and finalizers kept running.) This turned out to be a poor choice; the scenario it leads to is that a server assigns a buggy subsystem to do some work on a bunch of worker threads; all the worker threads go down silently, and the user is stuck with a server that is sitting there waiting patiently for results that will never come, because all the threads that produce results have disappeared. It is very difficult for the user to diagnose such a problem; a server that is working furiously on a hard problem and a server that is doing nothing because all its workers are dead look pretty much the same from the outside. The policy was therefore changed in CLR v2.0 such that an unhandled exception on a worker thread also takes down the process by default. You want to be noisy about your failures, not silent.

I am of the philosophical school that says that sudden, catastrophic failure of a software device is, of course, unfortunate, but in many cases it is preferable that the software call attention to the problem so that it can be fixed, rather than trying to muddle along in a bad state, possibly introducing a security hole or corrupting user data along the way. Software that terminates itself upon encountering unexpected exceptions is software that is less vulnerable to attackers taking advantage of its flaws. As Ripley said, when things go wrong you should take off and nuke the entire site from orbit; it's the only way to be sure. But does this awesome philosophy serve the async scenario well?

Last time I mentioned two interesting scenarios: (1) what happens if a task-returning async method does a WhenAll or WhenAny on multiple tasks, several of which throw exceptions? and (2) what if a void-returning async method awaits a task which completes abnormally? What happens to that exception?

Let's consider the first case first.

WhenAll collects all the exceptions from its completed sub-tasks and stuffs them into an aggregating exception. When all its sub-tasks complete, it completes its task abnormally with the aggregated exception. A slightly bizarre fact, however, is that by default, the EndAwait only re-throws the first of those exceptions; it does not re-throw the entire aggregating exception. The more common scenario is for any try-catch surrounding an "await" to be catching some set of specific exceptions; making you always write code that goes and unpacks the aggregating exception seems onerous. This may seem slightly odd; for more details on why this is a reasonable idea see Jon Skeet's recent posts on the topic.
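Concretely, that design lets calling code look something like the following sketch. I'm assuming a Task.WhenAll-style combinator as described above; FetchAsync, Log and the choice of WebException are made up:

async Task ArchiveAllAsync(List<string> urls)
{
    var fetches = urls.Select(url => FetchAsync(url)).ToList();
    var whenAll = Task.WhenAll(fetches);
    try
    {
        await whenAll;
    }
    catch (WebException ex)
    {
        // Only the *first* cached exception is re-thrown at the await...
        Log("at least one fetch failed: " + ex.Message);

        // ...but the whole aggregating exception is still available on the
        // task itself, should you care to unpack every failure.
        foreach (var inner in whenAll.Exception.InnerExceptions)
            Log("  " + inner.Message);
    }
}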

The WhenAny case is similar. Suppose the first sub-task completes, either normally or abnormally. That completes the WhenAny task, either normally or abnormally. Suppose one of the additional sub-tasks completes abnormally; what happens to its exception? The WhenAny is done: it has already completed and called its continuation, which is now scheduled to run on some work queue if it hasn't already.

In both the WhenAll and WhenAny cases we have a situation where there could be an exception that goes "unobserved" by the creator of the WhenAll or WhenAny task. That is to say, in both these cases there could be an exception that is thrown, automatically caught, cached and never thrown again which in the equivalent synchronous code would have brought down the process.
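Here is a sketch of how that situation arises; again FetchAsync and the Document type are invented, and I'm assuming a Task.WhenAny-style combinator as described above:

async Task<Document> FetchFirstAsync(string primaryUrl, string mirrorUrl)
{
    var primary = FetchAsync(primaryUrl);
    var mirror  = FetchAsync(mirrorUrl);

    // Completes as soon as *either* fetch completes, normally or abnormally.
    var winner = await Task.WhenAny(primary, mirror);

    // If the losing fetch later completes abnormally, its exception is cached
    // in its task and never looked at again by this method -- exactly the
    // "unobserved" situation described above.
    return await winner;
}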

This seems potentially bad. Should an unobserved exception from a task that was asynchronously awaited take down the process, as the equivalent synchronous code would have?

Suppose we decide that yes, an unobserved exception should take down the process. When does that happen? That is, when do we definitively know that the exception actually was not re-thrown? We only know that if the task object is finalized without its result ever being observed. After all, a "living" task object that has completed abnormally could have its continuation executed at any time in the future; it cannot know when that continuation is going to be scheduled. There could be any number of queued-up tasks on this thread that get to run between the time this task completed abnormally and its result is requested. As long as the task object is alive then its exception could be observed.

OK, so, great, if a task is finalized, and it completed abnormally then we... what? Throw the exception on the finalizer thread? Sure! That will take down the process, right? In CLR v2.0 and above, unhandled exceptions on any thread take down the process.

But let's take a step back. Remind me, why do we want an unobserved exception to take down the process? The philosophical reason is: we cannot tell whether this was a boneheaded exception that indicates a potentially horrible, security-impacting situation that needs to be dealt with by immediate termination, or simply the result of a missing handler for an unanticipated exogenous exception. The safe thing to do is to say that it was a boneheaded exception with a security impact and immediately take the process down. Which is precisely what we are not doing! We are waiting for the task to be collected by the garbage collector and then trying to take the process down on the finalizer thread. But in the gap between the exception being recorded in the task and the finalizer observing the exception, we've potentially kept right on running dozens more tasks, any of which could be using the inconsistent state caused by the boneheaded exception.

Furthermore, we anticipate that most async tasks that throw exceptions in realistic code will in fact be throwing exogenous exceptions like "the password for this web service is wrong" or "you don't have permission to read this file" or "this operation timed out", rather than boneheaded exceptions like "you dereferenced null" or "you tried to pop an empty stack". In these realistic cases it seems much more plausible to say that if for some reason a task completes abnormally and no one bothers to observe its result, it's because some asynchronous unit of work was abandoned; any of its sub-tasks that ran into problems connecting to web servers (or whatever) can safely be ignored.

In short, an unobserved exception from a finalized task is one that no one cares about, is probably harmless, and if it was harmful, then we've already delayed taking action too long to prevent more harm. Either way, we might as well ignore it.
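As an aside, the Task Parallel Library that shipped in .NET 4 exposes a TaskScheduler.UnobservedTaskException event for exactly this situation; a minimal sketch of logging such abandoned failures rather than silently ignoring them (the Log helper is made up):

// Subscribe once at application startup. The event is raised on the finalizer
// thread when a faulted task is collected without anyone having observed its
// exception.
TaskScheduler.UnobservedTaskException += (sender, e) =>
{
    foreach (var inner in e.Exception.InnerExceptions)
        Log("abandoned task failed: " + inner.Message); // made-up logger

    // Marking the exception as observed suppresses any further escalation.
    e.SetObserved();
};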


This does illustrate that asynchronous programming introduces a new flavour of security vulnerability. If there is a security vulnerability caused by a bug that would normally take down the process, and if that code is rewritten to be asynchronous, and if the buggy task is abandoned without observation of its exception, then the bug might not result in an aggressive destruction of the now-vulnerable process. And even if the exception is eventually observed, there might be a window in time between when the bug introduces the vulnerability and the exception is observed. That window might be large enough for an attacker to succeed. That sounds like a tortuous chain of things that have to go wrong - because it is - but attackers will take whatever they can get. They are crafty, they have all the time in the world, and they only have to succeed once.

I never did say what happens to a void-returning method that awaits a task; you can think of this as a "fire and forget" sort of method. Perhaps a void-returning button-click event handler awaits fetching some data asynchronously and then updating the user interface; there's no "caller" of the event handler that cares to hold on to a task, and will never observe its result. So what happens if the data-fetching task completes abnormally?

In that case, when the void-returning method (which registered itself as a continuation, remember) starts up again it checks to see if the task completed abnormally. If it did, then it immediately re-throws the exception to its caller, which is, of course, probably some message loop. I believe the plan of action here is to be consistent with the behaviour described above; in that scenario the message loop will discard the exception, assuming that the fire-and-forget asynchronous method failed in some benign way.
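Of course, if you'd rather a fire-and-forget handler observe its own failures instead of letting them escape to the message loop, nothing stops you from wrapping the await in an ordinary try-catch. A sketch, with made-up control names, FetchAsync and exception type:

async void FetchButton_Click(object sender, EventArgs e)
{
    try
    {
        var document = await FetchAsync(urlTextBox.Text);
        resultLabel.Text = document.Title;
    }
    catch (WebException ex)
    {
        // Observed right here, so nothing ever escapes to the message loop.
        resultLabel.Text = "Fetch failed: " + ex.Message;
    }
}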

Having been an advocate of the "nuke from orbit" philosophy of unhandled exceptions for many years, emotionally this does not sit well with me, but I'm unable to marshal a convincing argument against this strategy for dealing with exceptions in task-based asynchrony. Readers: What do you think? What is in your opinion the right thing to do in scenarios where exceptions of tasks go unobserved?

And on that somewhat ominous note, I'm going to take a break from talking about the new Task Asynchrony Pattern for now. Please download the CTP, keep sending us your feedback and questions, and start thinking about what sorts of things will work well or work poorly with this new feature. Next time: we'll pick up with more fabulous adventures after American Thanksgiving; I'm cooking turkey for 19 this year, which should be quite the adventure in and of itself.