curio Documentation
Release 0.9

David Beazley

May 14, 2018


Contents

1 Contents:
  1.1 Curio - A Tutorial Introduction
  1.2 Curio How-To
  1.3 Curio Reference Manual
  1.4 Developing with Curio

2 Installation

3 An Example

4 Additional Features

5 The Big Question: Why?

6 Under the Covers

7 Questions and Answers

8 About

Python Module Index


• a small and unusual object that is considered interesting or attractive

• A Python library for concurrent I/O and systems programming.

Curio is a library for performing concurrent I/O and common system programming tasks such as launching subprocesses and farming work out to thread and process pools. It uses Python coroutines and the explicit async/await syntax introduced in Python 3.5. Its programming model is based on cooperative multitasking and existing programming abstractions such as threads, sockets, files, subprocesses, locks, and queues. You'll find it to be small and fast.


CHAPTER 1

Contents:

1.1 Curio - A Tutorial Introduction

Curio is a library for performing concurrent I/O using Python coroutines and the async/await syntax introduced in Python 3.5. Its programming model is based on existing system programming abstractions such as threads, sockets, files, locks, and queues. Under the hood, it's based on a task model that provides for advanced handling of cancellation, interesting interactions between threads and processes, and much more. It's fun.

This tutorial will take you through the basics of creating and managing tasks in curio as well as some useful debugging features.

1.1.1 A Small Taste

Curio allows you to write concurrent I/O handling code that looks a lot like threads. For example, here is a simple echo server:

from curio import run, spawn
from curio.socket import *

async def echo_server(address):
    sock = socket(AF_INET, SOCK_STREAM)
    sock.setsockopt(SOL_SOCKET, SO_REUSEADDR, 1)
    sock.bind(address)
    sock.listen(5)
    print('Server listening at', address)
    async with sock:
        while True:
            client, addr = await sock.accept()
            await spawn(echo_client, client, addr, daemon=True)

async def echo_client(client, addr):
    print('Connection from', addr)
    async with client:
        while True:
            data = await client.recv(1000)
            if not data:
                break
            await client.sendall(data)
    print('Connection closed')

if __name__ == '__main__':
    run(echo_server, ('', 25000))

This server can handle thousands of concurrent clients. It does not use threads. However, this example really doesn't do Curio justice. In the rest of this tutorial, we'll start with the basics and work our way back to this before jumping off the deep end into something more advanced.

1.1.2 Getting Started

Here is a simple curio hello world program–a task that prints a simple countdown as you wait for your kid to put their shoes on:

# hello.py
import curio

async def countdown(n):
    while n > 0:
        print('T-minus', n)
        await curio.sleep(1)
        n -= 1

if __name__ == '__main__':
    curio.run(countdown, 10)

Run it and you’ll see a countdown. Yes, some jolly fun to be sure. Curio is based around the idea of tasks. Tasks aredefined as coroutines using async functions. To make a task execute, it must run inside the curio kernel. The run()function starts the kernel with an initial task. The kernel runs until there are no more tasks to complete.

1.1.3 Tasks

Let’s add a few more tasks into the mix:

# hello.py
import curio

async def countdown(n):
    while n > 0:
        print('T-minus', n)
        await curio.sleep(1)
        n -= 1

async def kid():
    print('Building the Millenium Falcon in Minecraft')
    await curio.sleep(1000)

async def parent():
    kid_task = await curio.spawn(kid)
    await curio.sleep(5)

    print("Let's go")
    count_task = await curio.spawn(countdown, 10)
    await count_task.join()

    print("We're leaving!")
    await kid_task.join()
    print('Leaving')

if __name__ == '__main__':
    curio.run(parent)

This program illustrates the process of creating and joining with tasks. Here, the parent() task uses the curio.spawn() coroutine to launch a new child task. After sleeping briefly, it then launches the countdown() task. The join() method is used to wait for a task to finish. In this example, the parent first joins with countdown() and then with kid() before trying to leave. If you run this program, you'll see it produce the following output:

bash % python3 hello.py
Building the Millenium Falcon in Minecraft
Let's go
T-minus 10
T-minus 9
T-minus 8
T-minus 7
T-minus 6
T-minus 5
T-minus 4
T-minus 3
T-minus 2
T-minus 1
We're leaving!
.... hangs ....

At this point, the program appears hung. The child is busy for the next 1000 seconds, the parent is blocked on join() and nothing much seems to be happening–this is the mark of all good concurrent programs (hanging that is). Change the last part of the program to run the kernel with the monitor enabled:

...
if __name__ == '__main__':
    curio.run(parent, with_monitor=True)

Run the program again. You’d really like to know what’s happening? Yes? Open up another terminal window andconnect to the monitor as follows:

bash % python3 -m curio.monitor
Curio Monitor: 4 tasks running
Type help for commands
curio >

See what’s happening by typing ps:

curio > ps
Task   State        Cycles     Timeout Sleep   Task
------ ------------ ---------- ------- ------- ----------------------------------------
1      FUTURE_WAIT  1          None    None    Monitor.monitor_task
2      READ_WAIT    1          None    None    Kernel._run_coro.<locals>._kernel_task
3      TASK_JOIN    3          None    None    parent
4      TIME_SLEEP   1          None    962.830 kid
curio >

In the monitor, you can see a list of the active tasks. You can see that the parent is waiting to join and that the kid is sleeping. Actually, you'd like to know more about what's happening. You can get the stack trace of any task using the where command:

curio > w 3
Stack for Task(id=3, name='parent', state='TASK_JOIN') (most recent call last):
  File "hello.py", line 23, in parent
    await kid_task.join()

curio > w 4
Stack for Task(id=4, name='kid', state='TIME_SLEEP') (most recent call last):
  File "hello.py", line 12, in kid
    await curio.sleep(1000)

curio >

Actually, that kid is just being super annoying. Let’s cancel their world:

curio > cancel 4
Cancelling task 4

*** Connection closed by remote host ***

This causes the whole program to die with a rather nasty traceback message like this:

Traceback (most recent call last):
  File "/Users/beazley/Desktop/Projects/curio/curio/kernel.py", line 828, in _run_coro
    trap = current._throw(current.next_exc)
  File "/Users/beazley/Desktop/Projects/curio/curio/task.py", line 95, in _task_runner
    return await coro
  File "hello.py", line 12, in kid
    await curio.sleep(1000)
  File "/Users/beazley/Desktop/Projects/curio/curio/task.py", line 440, in sleep
    return await _sleep(seconds, False)
  File "/Users/beazley/Desktop/Projects/curio/curio/traps.py", line 80, in _sleep
    return (yield (_trap_sleep, clock, absolute))
curio.errors.TaskCancelled: TaskCancelled

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "hello.py", line 27, in <module>
    curio.run(parent, with_monitor=True, debug=())
  File "/Users/beazley/Desktop/Projects/curio/curio/kernel.py", line 872, in run
    return kernel.run(corofunc, *args, timeout=timeout)
  File "/Users/beazley/Desktop/Projects/curio/curio/kernel.py", line 212, in run
    raise ret_exc
  File "/Users/beazley/Desktop/Projects/curio/curio/kernel.py", line 825, in _run_coro
    trap = current._send(current.next_value)
  File "/Users/beazley/Desktop/Projects/curio/curio/task.py", line 95, in _task_runner
    return await coro
  File "hello.py", line 23, in parent
    await kid_task.join()
  File "/Users/beazley/Desktop/Projects/curio/curio/task.py", line 108, in join
    raise TaskError('Task crash') from self.next_exc
curio.errors.TaskError: Task crash


Not surprisingly, the parent sure didn't like having their child process abruptly killed out of nowhere like that. The join() method returned with a TaskError exception to indicate that some kind of problem occurred in the child.
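
If dying with a traceback is not what you want, the parent can catch the TaskError itself. Here is a minimal sketch of that pattern; the deliberately crashing child and the printed message are made up for illustration:

import curio

async def kid():
    raise RuntimeError('Minecraft crashed')

async def parent():
    kid_task = await curio.spawn(kid)
    try:
        await kid_task.join()
    except curio.TaskError as e:
        # join() wraps whatever went wrong in the child in a TaskError;
        # the original exception is chained and available as __cause__.
        print('The child had a problem:', repr(e.__cause__))

if __name__ == '__main__':
    curio.run(parent)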

Debugging is an important feature of curio and by using the monitor, you see what's happening as tasks run. You can find out where tasks are blocked and you can cancel any task that you want. However, it's not necessary to do this in the monitor. Change the parent task to include a timeout and some debugging print statements like this:

async def parent():
    kid_task = await curio.spawn(kid)
    await curio.sleep(5)

    print("Let's go")
    count_task = await curio.spawn(countdown, 10)
    await count_task.join()

    print("We're leaving!")
    try:
        await curio.timeout_after(10, kid_task.join)
    except curio.TaskTimeout:
        print('Where are you???')
        print(kid_task.traceback())
        raise SystemExit()

    print('Leaving!')

If you run this version, the parent will wait 10 seconds for the child to join. If not, a debugging traceback for the child task is printed and the program quits. Use the traceback() method to see a traceback. Raising SystemExit() causes Curio to quit in the same manner as normal Python programs.

The parent could also elect to forcefully cancel the child. Change the program so that it looks like this:

async def parent():
    kid_task = await curio.spawn(kid)
    await curio.sleep(5)

    print("Let's go")
    count_task = await curio.spawn(countdown, 10)
    await count_task.join()

    print("We're leaving!")
    try:
        await curio.timeout_after(10, kid_task.join)
    except curio.TaskTimeout:
        print('I warned you!')
        await kid_task.cancel()

    print('Leaving!')

Of course, all is not lost in the child. If desired, they can catch the cancellation request and clean up. For example:

async def kid():
    try:
        print('Building the Millenium Falcon in Minecraft')
        await curio.sleep(1000)
    except curio.CancelledError:
        print('Fine. Saving my work.')
        raise

Now your program should produce output like this:


bash % python3 hello.py
Building the Millenium Falcon in Minecraft
Let's go
T-minus 10
T-minus 9
T-minus 8
T-minus 7
T-minus 6
T-minus 5
T-minus 4
T-minus 3
T-minus 2
T-minus 1
We're leaving!
I warned you!
Fine. Saving my work.
Leaving!

By now, you have the basic gist of the curio task model. You can create tasks, join tasks, and cancel tasks. Even if a task appears to be blocked for a long time, it can be cancelled by another task or a timeout. You have a lot of control over the environment.
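
As a recap, here is a compact sketch that exercises all three operations in one place: it spawns a task, waits for it with a timeout, and cancels it if it takes too long. The worker and the timing values are arbitrary.

import curio

async def worker():
    await curio.sleep(1000)       # Pretend to be busy for a long time

async def supervisor():
    task = await curio.spawn(worker)
    try:
        await curio.timeout_after(2, task.join)    # Wait at most 2 seconds
    except curio.TaskTimeout:
        await task.cancel()                        # Forcefully cancel it
        print('worker cancelled')

if __name__ == '__main__':
    curio.run(supervisor)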

1.1.4 Task Groups

What kind of kid plays Minecraft alone? Of course, they're going to invite all of their school friends over. Change the kid() function like this:

async def friend(name):
    print('Hi, my name is', name)
    print('Playing Minecraft')
    try:
        await curio.sleep(1000)
    except curio.CancelledError:
        print(name, 'going home')
        raise

async def kid():
    print('Building the Millenium Falcon in Minecraft')
    async with curio.TaskGroup() as f:
        await f.spawn(friend, 'Max')
        await f.spawn(friend, 'Lillian')
        await f.spawn(friend, 'Thomas')
        try:
            await curio.sleep(1000)
        except curio.CancelledError:
            print('Fine. Saving my work.')
            raise

In this code, the kid creates a task group and spawns a collection of tasks into it. Now you've got a four-fold problem of tasks sitting around doing nothing useful. You'd think the parent might have a problem with a motley crew like this, but no. If you run the code again, you'll get output like this:

Building the Millenium Falcon in Minecraft
Hi, my name is Max
Playing Minecraft
Hi, my name is Lillian
Playing Minecraft
Hi, my name is Thomas
Playing Minecraft
Let's go
T-minus 10
T-minus 9
T-minus 8
T-minus 7
T-minus 6
T-minus 5
T-minus 4
T-minus 3
T-minus 2
T-minus 1
We're leaving!
I warned you!
Fine. Saving my work.
Max going home
Lillian going home
Thomas going home
Leaving!

Carefully observe how all of those friends just magically went away. That's the defining feature of a TaskGroup. You can spawn tasks into a group and they will either all complete or they'll all get cancelled if any kind of error occurs. Either way, none of those tasks are executing when control flow leaves the with-block. In this case, the cancellation of kid() causes a cancellation to propagate to all of those friend tasks who promptly leave. Again, problem solved.

1.1.5 Task Synchronization

Although threads are not used to implement curio, you still might have to worry about task synchronization issues (e.g., if more than one task is working with mutable state). For this purpose, curio provides Event, Lock, Semaphore, and Condition objects. For example, let's introduce an event that makes the child wait for the parent's permission to start playing:

start_evt = curio.Event()

async def kid():
    print('Can I play?')
    await start_evt.wait()

    print('Building the Millenium Falcon in Minecraft')

    async with curio.TaskGroup() as f:
        await f.spawn(friend, 'Max')
        await f.spawn(friend, 'Lillian')
        await f.spawn(friend, 'Thomas')
        try:
            await curio.sleep(1000)
        except curio.CancelledError:
            print('Fine. Saving my work.')
            raise

async def parent():
    kid_task = await curio.spawn(kid)
    await curio.sleep(5)

    print('Yes, go play')
    await start_evt.set()
    await curio.sleep(5)

    print("Let's go")
    count_task = await curio.spawn(countdown, 10)
    await count_task.join()

    print("We're leaving!")
    try:
        await curio.timeout_after(10, kid_task.join)
    except curio.TaskTimeout:
        print('I warned you!')
        await kid_task.cancel()

    print('Leaving!')

All of the synchronization primitives work the same way that they do in the threading module. The main difference is that all operations must be prefaced by await. Thus, to set an event you use await start_evt.set() and to wait for an event you use await start_evt.wait().
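
The other primitives follow the same pattern. For instance, here is a small sketch (the tasks and the shared list are made up for illustration) of a curio.Lock used to serialize access to shared mutable state:

import curio

results = []
lock = curio.Lock()

async def worker(label):
    for n in range(3):
        async with lock:              # Acquired/released like a threading.Lock, but via async with
            results.append((label, n))
        await curio.sleep(0.1)

async def main():
    async with curio.TaskGroup() as g:
        await g.spawn(worker, 'a')
        await g.spawn(worker, 'b')
    print(results)

if __name__ == '__main__':
    curio.run(main)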

All of the synchronization methods also support timeouts. So, if the kid wanted to be rather annoying, they could use a timeout to repeatedly nag like this:

async def kid():
    while True:
        try:
            print('Can I play?')
            await curio.timeout_after(1, start_evt.wait)
            break
        except curio.TaskTimeout:
            print('Wha!?!')

    print('Building the Millenium Falcon in Minecraft')

    async with curio.TaskGroup() as f:
        await f.spawn(friend, 'Max')
        await f.spawn(friend, 'Lillian')
        await f.spawn(friend, 'Thomas')
        try:
            await curio.sleep(1000)
        except curio.CancelledError:
            print('Fine. Saving my work.')
            raise

1.1.6 Signals

What kind of screen-time obsessed helicopter parent lets their child and friends play Minecraft for a measly 5 seconds? Instead, let's have the parent allow the child to play as much as they want until a Unix signal arrives, indicating that it's time to go. Modify the code to wait for Control-C or a SIGTERM using a SignalEvent like this:

import signal

async def parent():
    goodbye = curio.SignalEvent(signal.SIGINT, signal.SIGTERM)

    kid_task = await curio.spawn(kid)
    await curio.sleep(5)

    print('Yes, go play')
    await start_evt.set()

    await goodbye.wait()

    print("Let's go")
    count_task = await curio.spawn(countdown, 10)
    await count_task.join()
    print("We're leaving!")
    try:
        await curio.timeout_after(10, kid_task.join)
    except curio.TaskTimeout:
        print('I warned you!')
        await kid_task.cancel()

    print('Leaving!')

If you run this program, you’ll get output like this:

Building the Millenium Falcon in Minecraft
Hi, my name is Max
Playing Minecraft
Hi, my name is Lillian
Playing Minecraft
Hi, my name is Thomas
Playing Minecraft

At this point, nothing is going to happen for a while. The kids will play for the next 1000 seconds. However, if you press Control-C, you'll see the program initiate its usual shutdown sequence:

^C (Control-C)
Let's go
T-minus 10
T-minus 9
T-minus 8
T-minus 7
T-minus 6
T-minus 5
T-minus 4
T-minus 3
T-minus 2
T-minus 1
We're leaving!
I warned you!
Fine. Saving my work.
Max going home
Lillian going home
Thomas going home
Leaving!

In either case, you’ll see the parent wake up, do the countdown and proceed to cancel the child. All the friends gohome. Very good.

Signals are a weird affair though. Suppose that the parent discovers that the house is on fire and wants to get the kids out of there fast. As written, a SignalEvent captures the appropriate signal and sets a sticky flag. If the same signal comes in again, nothing much happens. In this code, the shutdown sequence would run to completion no matter how many times you hit Control-C. Everyone dies. Sadness.

This problem is easily solved–just delete the event after you’re done with it. Like this:

async def parent():
    goodbye = curio.SignalEvent(signal.SIGINT, signal.SIGTERM)

    kid_task = await curio.spawn(kid)
    await curio.sleep(5)

    print('Yes, go play')
    await start_evt.set()

    await goodbye.wait()
    del goodbye               # Removes the Control-C handler

    print("Let's go")
    count_task = await curio.spawn(countdown, 10)
    await count_task.join()
    print("We're leaving!")
    try:
        await curio.timeout_after(10, kid_task.join)
    except curio.TaskTimeout:
        print('I warned you!')
        await kid_task.cancel()

    print('Leaving!')

Run the program again. Now, quickly hit Control-C twice in a row. Boom! Minecraft dies instantly and everyone hurries their way out of there. You'll see the friends, the child, and the parent all making a hasty exit.

1.1.7 Number Crunching and Blocking Operations

Now, suppose for a moment that the kid has discovered that the shape of the Millenium Falcon is based on the Golden Ratio and that building it now requires computing a sum of larger and larger Fibonacci numbers using an exponential algorithm like this:

def fib(n):
    if n < 2:
        return 1
    else:
        return fib(n-1) + fib(n-2)

async def kid():
    while True:
        try:
            print('Can I play?')
            await curio.timeout_after(1, start_evt.wait)
            break
        except curio.TaskTimeout:
            print('Wha!?!')

    print('Building the Millenium Falcon in Minecraft')
    async with curio.TaskGroup() as f:
        await f.spawn(friend, 'Max')
        await f.spawn(friend, 'Lillian')
        await f.spawn(friend, 'Thomas')
        try:
            total = 0
            for n in range(50):
                total += fib(n)
        except curio.CancelledError:
            print('Fine. Saving my work.')
            raise

If you run this version, you’ll find that the entire kernel becomes unresponsive. For example, signals aren’t caught andthere appears to be no way to get control back. The problem here is that the kid is hogging the CPU and never yields.Important lesson: curio DOES NOT provide preemptive scheduling. If a task decides to compute large Fibonaccinumbers or mine bitcoins, everything will block until it’s done. Don’t do that.

If you’re trying to debug a situation like this, the good news is that you can still use the Curio monitor to find outwhat’s happening. For example, you could start a separate terminal window and type this:

bash % python3 -m curio.monitor
Curio Monitor: 7 tasks running
Type help for commands
curio > ps
Task   State        Cycles     Timeout Sleep   Task
------ ------------ ---------- ------- ------- ----------------------------------------
1      FUTURE_WAIT  1          None    None    Monitor.monitor_task
2      READ_WAIT    1          None    None    Kernel._run_coro.<locals>._kernel_task
3      FUTURE_WAIT  2          None    None    parent
4      RUNNING      6          None    None    kid
5      READY        0          None    None    friend
6      READY        0          None    None    friend
7      READY        0          None    None    friend
curio > w 4
Stack for Task(id=4, name='kid', state='RUNNING') (most recent call last):
  File "hello.py", line 44, in kid
    total += fib(n)

curio > signal SIGKILL

*** Connection closed by remote host ***
bash %

The bad news is that if you want other tasks to run, you'll have to figure out some other way to carry out computationally intensive work. If you know that the work might take a while, you can have it execute in a separate process. Change the code to use curio.run_in_process() like this:

async def kid():
    while True:
        try:
            print('Can I play?')
            await curio.timeout_after(1, start_evt.wait)
            break
        except curio.TaskTimeout:
            print('Wha!?!')

    print('Building the Millenium Falcon in Minecraft')
    async with curio.TaskGroup() as f:
        await f.spawn(friend, 'Max')
        await f.spawn(friend, 'Lillian')
        await f.spawn(friend, 'Thomas')
        try:
            total = 0
            for n in range(50):
                total += await curio.run_in_process(fib, n)
        except curio.CancelledError:
            print('Fine. Saving my work.')
            raise

In this version, the kernel remains fully responsive because the CPU intensive work is being carried out in a subprocess. You should be able to run the monitor, send the signal, and see the shutdown occur as before.

The problem of blocking might also apply to operations involving I/O. For example, suppose your kid starts hanging out with a bunch of savvy 5th graders who are into microservices. Suddenly, the kid() task morphs into something that's making HTTP requests and decoding JSON:

import requests

async def kid():
    while True:
        try:
            print('Can I play?')
            await curio.timeout_after(1, start_evt.wait)
            break
        except curio.TaskTimeout:
            print('Wha!?!')

    print('Building the Millenium Falcon in Minecraft')
    async with curio.TaskGroup() as f:
        await f.spawn(friend, 'Max')
        await f.spawn(friend, 'Lillian')
        await f.spawn(friend, 'Thomas')
        try:
            total = 0
            for n in range(50):
                r = requests.get(f'http://www.dabeaz.com/cgi-bin/fib.py?n={n}')
                resp = r.json()
                total += int(resp['value'])
        except curio.CancelledError:
            print('Fine. Saving my work.')
            raise

That’s great except that the popular requests library knows nothing of Curio and it blocks the internal event loopwhile waiting for a response. This is essentially the same problem as before except that requests.get() mainlyspends its time waiting. For this, you can use curio.run_in_thread() to offload work to a separate thread.Modify the code like this:

import requests

async def kid():
    while True:
        try:
            print('Can I play?')
            await curio.timeout_after(1, start_evt.wait)
            break
        except curio.TaskTimeout:
            print('Wha!?!')

    print('Building the Millenium Falcon in Minecraft')
    async with curio.TaskGroup() as f:
        await f.spawn(friend, 'Max')
        await f.spawn(friend, 'Lillian')
        await f.spawn(friend, 'Thomas')
        try:
            total = 0
            for n in range(50):
                r = await curio.run_in_thread(requests.get,
                        f'http://www.dabeaz.com/cgi-bin/fib.py?n={n}')
                resp = r.json()
                total += int(resp['value'])
        except curio.CancelledError:
            print('Fine. Saving my work.')
            raise

You’ll find that this version works. All of the tasks run, you can send signals, and it’s responsive.

1.1.8 Curiouser and Curiouser

Like actual kids, as much as you tell tasks to be responsible, you can never be quite sure that they're going to do the right thing in all circumstances. The previous section on blocking operations illustrates a problem that lurks in the shadows of any async program–namely the lack of task preemption and the risk of blocking the internal event loop without even knowing it. Potentially any operation not involving an explicit await is suspect. However, it's really up to you to know more about the nature of what's being done and to explicitly use calls such as run_in_thread() or run_in_process() as appropriate.

There is another approach however. Rewrite the fib() function and kid() task as follows:

@curio.async_thread
def fib(n):
    r = requests.get(f'http://www.dabeaz.com/cgi-bin/fib.py?n={n}')
    resp = r.json()
    return int(resp['value'])

async def kid():
    while True:
        try:
            print('Can I play?')
            await curio.timeout_after(1, start_evt.wait)
            break
        except curio.TaskTimeout:
            print('Wha!?!')

    print('Building the Millenium Falcon in Minecraft')
    async with curio.TaskGroup() as f:
        await f.spawn(friend, 'Max')
        await f.spawn(friend, 'Lillian')
        await f.spawn(friend, 'Thomas')
        try:
            total = 0
            for n in range(50):
                total += await fib(n)
        except curio.CancelledError:
            print('Fine. Saving my work.')
            raise

In this code, the kid() task uses await fib() to call the fib() function. It looks like you're calling a coroutine, but in reality, it's launching a background thread and running the function in that. Since it's a separate thread, blocking operations aren't going to block the rest of Curio. In fact, you'll find that the example works the same as it did before.

Functions marked with @async_thread are also unusual in that they can be called from normal synchronous code as well. For example, you could launch an interactive interpreter and do this:

>>> fib(5)
8
>>>

In this case, there is no need to launch a background thread–the function simply runs as it normally would.

There’s more than meets the eye when it comes to Curio and threads. However, Curio provides a number of featuresfor making coroutines and threads play nicely together. This is only a small taste.

1.1.9 A Simple Echo Server

Now that you’ve got the basics down, let’s look at some I/O. Perhaps the main use of Curio is in network programming.Here is a simple echo server written directly with sockets using curio:

from curio import run, spawn
from curio.socket import *

async def echo_server(address):
    sock = socket(AF_INET, SOCK_STREAM)
    sock.setsockopt(SOL_SOCKET, SO_REUSEADDR, 1)
    sock.bind(address)
    sock.listen(5)
    print('Server listening at', address)
    async with sock:
        while True:
            client, addr = await sock.accept()
            await spawn(echo_client, client, addr, daemon=True)

async def echo_client(client, addr):
    print('Connection from', addr)
    async with client:
        while True:
            data = await client.recv(1000)
            if not data:
                break
            await client.sendall(data)
    print('Connection closed')

if __name__ == '__main__':
    run(echo_server, ('', 25000))

Run this program and try connecting to it using a command such as nc or telnet. You'll see the program echoing back data to you. Open up multiple connections and see that it handles multiple client connections perfectly well:

bash % nc localhost 25000
Hello               (you type)
Hello               (response)
Is anyone there?    (you type)
Is anyone there?    (response)
^C
bash %


If you’ve written a similar program using sockets and threads, you’ll find that this program looks nearly identicalexcept for the use of async and await. Any operation that involves I/O, blocking, or the services of Curio isprefaced by await.

Carefully notice that we are using the module curio.socket instead of the built-in socket module here. curio.socket is a wrapper around the existing socket module. All of the existing functionality of socket is available, but all of the operations that might block have been replaced by coroutines and must be preceded by an explicit await.
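
For example, an outgoing connection written directly against curio.socket looks just like its synchronous counterpart apart from the await keywords. This is only an illustrative sketch; the host and request are placeholders:

from curio import run
from curio.socket import *

async def fetch(host, port):
    sock = socket(AF_INET, SOCK_STREAM)
    async with sock:
        await sock.connect((host, port))     # connect/sendall/recv are coroutines here
        await sock.sendall(b'GET / HTTP/1.0\r\nHost: ' + host.encode('ascii') + b'\r\n\r\n')
        data = await sock.recv(10000)
        print(data)

if __name__ == '__main__':
    run(fetch, 'www.python.org', 80)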

The use of an asynchronous context manager might be something new. For example, you’ll notice the code uses this:

async with sock:
    ...

Normally, a context manager takes care of closing a socket when you're done using it. The same thing happens here. However, because you're operating in an environment of cooperative multitasking, you should use the asynchronous variant instead. As a general rule, all I/O related operations in curio will use the async form.

A lot of the above code involving sockets is fairly repetitive. Instead of writing the part that sets up the server, you can simplify the above example using tcp_server() like this:

from curio import run, spawn, tcp_server

async def echo_client(client, addr):
    print('Connection from', addr)
    while True:
        data = await client.recv(1000)
        if not data:
            break
        await client.sendall(data)
    print('Connection closed')

if __name__ == '__main__':
    run(tcp_server, '', 25000, echo_client)

The tcp_server() coroutine takes care of a few low-level details such as creating the server socket and binding it to an address. It also takes care of properly closing the client socket so you no longer need the extra async with client statement from before. Clients are also launched into a proper task group so cancellation of the server shuts everything down just like the kid's friends in the earlier example.

1.1.10 A Stream-Based Echo Server

In certain cases, it might be easier to work with a socket connection using a file-like stream interface. Here is an example:

from curio import run, spawn, tcp_server

async def echo_client(client, addr):
    print('Connection from', addr)
    s = client.as_stream()
    while True:
        data = await s.read(1000)
        if not data:
            break
        await s.write(data)
    print('Connection closed')

if __name__ == '__main__':
    run(tcp_server, '', 25000, echo_client)

The socket.as_stream() method can be used to wrap the socket in a file-like object for reading and writing. On this object, you would now use standard file methods such as read(), readline(), and write(). One feature of a stream is that you can easily read data line-by-line using an async for statement like this:

from curio import run, spawn, tcp_server

async def echo_client(client, addr):
    print('Connection from', addr)
    s = client.as_stream()
    async for line in s:
        await s.write(line)
    print('Connection closed')

if __name__ == '__main__':
    run(tcp_server, '', 25000, echo_client)

This is potentially useful if you’re writing code to read HTTP headers or some similar task.

1.1.11 A Managed Echo Server

Let’s make a slightly more sophisticated echo server that responds to a Unix signal and gracefully restarts:

import signal
from curio import run, spawn, SignalQueue, CancelledError, tcp_server
from curio.socket import *

async def echo_client(client, addr):
    print('Connection from', addr)
    s = client.as_stream()
    try:
        async for line in s:
            await s.write(line)
    except CancelledError:
        await s.write(b'SERVER IS GOING AWAY!\n')
        raise
    print('Connection closed')

async def main(host, port):
    async with SignalQueue(signal.SIGHUP) as restart:
        while True:
            print('Starting the server')
            serv_task = await spawn(tcp_server, host, port, echo_client)
            await restart.get()
            print('Server shutting down')
            await serv_task.cancel()

if __name__ == '__main__':
    run(main('', 25000))

In this code, the main() coroutine launches the server, but then waits for the arrival of a SIGHUP signal. When received, it cancels the server. Behind the scenes, the server has spawned all children into a task group, so all active children also get cancelled and print a "server is going away" message back to their clients. Just to be clear, if there were 1000 connected clients at the time the restart occurs, the server would drop all 1000 clients at once and start fresh with no active connections.

The use of a SignalQueue here is useful if you want to respond to a signal more than once. Instead of merely setting a flag like an event, each occurrence of a signal is queued. Use the get() method to get the signals as they arrive.
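
For instance, here is a hypothetical task that does nothing but count SIGHUP signals as they arrive; each delivery shows up as a separate item on the queue:

import signal
import curio

async def hup_counter():
    count = 0
    async with curio.SignalQueue(signal.SIGHUP) as sigs:
        while True:
            await sigs.get()          # One queue entry per received signal
            count += 1
            print('SIGHUP received,', count, 'so far')

if __name__ == '__main__':
    curio.run(hup_counter)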

1.1.12 Intertask Communication

If you have multiple tasks and want them to communicate, use a Queue. For example, here's a program that builds a little publish-subscribe service out of a queue, a dispatcher task, and a publish function:

from curio import run, TaskGroup, Queue, sleep

messages = Queue()
subscribers = set()

# Dispatch task that forwards incoming messages to subscribers
async def dispatcher():
    async for msg in messages:
        for q in list(subscribers):
            await q.put(msg)

# Publish a message
async def publish(msg):
    await messages.put(msg)

# A sample subscriber task
async def subscriber(name):
    queue = Queue()
    subscribers.add(queue)
    try:
        async for msg in queue:
            print(name, 'got', msg)
    finally:
        subscribers.discard(queue)

# A sample producer task
async def producer():
    for i in range(10):
        await publish(i)
        await sleep(0.1)

async def main():
    async with TaskGroup() as g:
        await g.spawn(dispatcher)
        await g.spawn(subscriber, 'child1')
        await g.spawn(subscriber, 'child2')
        await g.spawn(subscriber, 'child3')
        ptask = await g.spawn(producer)
        await ptask.join()
        await g.cancel_remaining()

if __name__ == '__main__':
    run(main)

Curio provides the same synchronization primitives as found in the built-in threading module. The same techniques used by threads can be used with curio. All things equal though, prefer to use queues if you can.

1.1.13 A Chat Server

Let’s put more of our tools into practice and implement a chat server. This server combines a bit of network program-ming with the publish-subscribe system you just built. Here it is:

# chat.py

import signal
from curio import run, spawn, SignalQueue, TaskGroup, Queue, tcp_server, CancelledError
from curio.socket import *

messages = Queue()
subscribers = set()

async def dispatcher():
    async for msg in messages:
        for q in subscribers:
            await q.put(msg)

async def publish(msg):
    await messages.put(msg)

# Task that writes chat messages to clients
async def outgoing(client_stream):
    queue = Queue()
    try:
        subscribers.add(queue)
        async for name, msg in queue:
            await client_stream.write(name + b':' + msg)
    finally:
        subscribers.discard(queue)

# Task that reads chat messages and publishes them
async def incoming(client_stream, name):
    try:
        async for line in client_stream:
            await publish((name, line))
    except CancelledError:
        await client_stream.write(b'SERVER IS GOING AWAY!\n')
        raise

# Supervisor task for each connection
async def chat_handler(client, addr):
    print('Connection from', addr)
    async with client:
        client_stream = client.as_stream()
        await client_stream.write(b'Your name: ')
        name = (await client_stream.readline()).strip()
        await publish((name, b'joined\n'))

        async with TaskGroup(wait=any) as workers:
            await workers.spawn(outgoing, client_stream)
            await workers.spawn(incoming, client_stream, name)

        await publish((name, b'has gone away\n'))

    print('Connection closed')

async def chat_server(host, port):
    async with TaskGroup() as g:
        await g.spawn(dispatcher)
        await g.spawn(tcp_server, host, port, chat_handler)

async def main(host, port):
    async with SignalQueue(signal.SIGHUP) as restart:
        while True:
            print('Starting the server')
            serv_task = await spawn(chat_server, host, port)
            await restart.get()
            print('Server shutting down')
            await serv_task.cancel()

if __name__ == '__main__':
    run(main('', 25000))

This code might take a bit to digest, but here are some important bits. Each connection results in two tasks being spawned (incoming and outgoing). The incoming task reads incoming lines and publishes them. The outgoing task subscribes to the feed and sends outgoing messages. The workers task group supervises these two tasks. If any one of them terminates, the other task is cancelled right away.

The chat_server task launches both the dispatcher and a tcp_server task and watches them. If cancelled, both of those tasks will be shut down. This includes all active client connections (each of which will get a "server is going away" message).

Spend some time to play with this code. Allow clients to come and go. Send the server a SIGHUP and watch it drop all of its clients. It's neat.

1.1.14 Programming Advice

At this point, you should have enough of the core concepts to get going. Here are a few programming tips to keep in mind:

• When writing code, think thread programming and synchronous code. Tasks execute like threads and would need to be synchronized in much the same way. However, unlike threads, tasks can only be preempted on statements that explicitly use await or async.

• Curio uses the same I/O abstractions that you would use in normal synchronous code (e.g., sockets, files, etc.). Methods have the same names and perform the same functions. However, all operations that potentially involve I/O or blocking will always be prefaced by an explicit await keyword.

• Be extra wary of any library calls that do not use an explicit await. Although these calls will work, they could potentially block the kernel on I/O or long-running calculations. If you know that either of these are possible, consider the use of the run_in_process() or run_in_thread() functions to execute the work (see the short sketch below).
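
As an illustration of the last point, a blocking call from the standard library can be handed off to a thread so the rest of the kernel keeps running. This is only a sketch; socket.gethostbyname is used purely as an example of a call that blocks:

import socket
import curio

async def lookup(name):
    # gethostbyname() blocks; running it in a thread keeps other tasks responsive
    addr = await curio.run_in_thread(socket.gethostbyname, name)
    print(name, 'is', addr)

async def main():
    async with curio.TaskGroup() as g:
        await g.spawn(lookup, 'www.python.org')
        await g.spawn(curio.sleep, 1)     # Other tasks continue to run meanwhile

if __name__ == '__main__':
    curio.run(main)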

1.1.15 Debugging Tips

A common programming mistake is to forget to use await. For example:

async def countdown(n):
    while n > 0:
        print('T-minus', n)
        curio.sleep(5)        # Missing await
        n -= 1

This will usually result in a warning message:

example.py:8: RuntimeWarning: coroutine 'sleep' was never awaited

For debugging a program that is otherwise running, but you're not exactly sure what it might be doing (perhaps it's hung or deadlocked), consider the use of the curio monitor. For example:

import curio
...
run(..., with_monitor=True)

The monitor can show you the state of each task and you can get stack traces. Remember that you enter the monitor by running python3 -m curio.monitor in a separate window.

You can also turn on scheduler tracing with code like this:

from curio.debug import schedtrace
import logging
logging.basicConfig(level=logging.DEBUG)
run(..., debug=schedtrace)

This will write log information about the scheduling of tasks. If you want even more fine-grained information, you can enable trap tracing using this:

from curio.debug import traptrace
import logging
logging.basicConfig(level=logging.DEBUG)
run(..., debug=traptrace)

This will write a log of every low-level operation being performed by the kernel.
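
Since the debug argument is described in the reference section as a list of debugging features, it should be possible to enable several tracers at once. A minimal sketch, assuming the list form is accepted:

import logging
from curio import run, sleep
from curio.debug import schedtrace, traptrace

logging.basicConfig(level=logging.DEBUG)

async def main():
    await sleep(1)

run(main, debug=[schedtrace, traptrace])   # Both scheduling and trap logs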

1.1.16 More Information

The official Github page at https://github.com/dabeaz/curio should be used for bug reports, pull requests, and other activities.

A reference manual can be found at https://curio.readthedocs.io/en/latest/reference.html.

A more detailed developer’s guide can be found at https://curio.readthedocs.io/en/latest/devel.html.

See the HowTo guide at https://curio.readthedocs.io/en/latest/howto.html for more tips and techniques.

1.2 Curio How-To

This document provides some recipes for using Curio to perform common tasks.

1.2.1 How do you write a simple TCP server?

Here is an example of a simple TCP echo server:


from curio import run, spawn, tcp_server

async def echo_client(client, addr):
    print('Connection from', addr)
    while True:
        data = await client.recv(100000)
        if not data:
            break
        await client.sendall(data)
    print('Connection closed')

if __name__ == '__main__':
    run(tcp_server, '', 25000, echo_client)

This server uses sockets directly. If you want to use a file-like stream interface, use the as_stream() method like this:

from curio import run, spawn, tcp_server

async def echo_client(client, addr):
    print('Connection from', addr)
    s = client.as_stream()
    while True:
        data = await s.read(100000)
        if not data:
            break
        await s.write(data)
    print('Connection closed')

if __name__ == '__main__':
    run(tcp_server, '', 25000, echo_client)

1.2.2 How do you write a UDP Server?

Here is an example of a simple UDP echo server using sockets:

import curio
from curio import socket

async def udp_echo(addr):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(addr)
    while True:
        data, addr = await sock.recvfrom(10000)
        print('Received from', addr, data)
        await sock.sendto(data, addr)

if __name__ == '__main__':
    curio.run(udp_echo, ('', 26000))

At this time, there is no high-level function (i.e., similar to tcp_server()) to run a UDP server.


1.2.3 How do you make an outgoing connection?

Curio provides some high-level functions for making outgoing connections. For example, here is a task that makes a connection to www.python.org:

import curio

async def main():
    sock = await curio.open_connection('www.python.org', 80)
    async with sock:
        await sock.sendall(b'GET / HTTP/1.0\r\nHost: www.python.org\r\n\r\n')
        chunks = []
        while True:
            chunk = await sock.recv(10000)
            if not chunk:
                break
            chunks.append(chunk)

    response = b''.join(chunks)
    print(response.decode('latin-1'))

if __name__ == '__main__':
    curio.run(main)

If you run this, you should get some output that looks similar to this:

HTTP/1.1 301 Moved Permanently
Server: Varnish
Retry-After: 0
Location: https://www.python.org/
Content-Length: 0
Accept-Ranges: bytes
Date: Fri, 30 Oct 2015 17:33:34 GMT
Via: 1.1 varnish
Connection: close
X-Served-By: cache-dfw1826-DFW
X-Cache: HIT
X-Cache-Hits: 0
Strict-Transport-Security: max-age=63072000; includeSubDomains

Ah, a redirect to HTTPS. Let’s make a connection with SSL applied to it:

import curio

async def main():
    sock = await curio.open_connection('www.python.org', 443,
                                       ssl=True,
                                       server_hostname='www.python.org')
    async with sock:
        await sock.sendall(b'GET / HTTP/1.0\r\nHost: www.python.org\r\n\r\n')
        chunks = []
        while True:
            chunk = await sock.recv(10000)
            if not chunk:
                break
            chunks.append(chunk)

    response = b''.join(chunks)
    print(response.decode('latin-1'))

if __name__ == '__main__':
    curio.run(main)

It’s worth noting that the primary purpose of curio is merely concurrency and I/O. You can create sockets and you canapply things such as SSL to them. However, curio doesn’t implement any application-level protocols such as HTTP.Think of curio as a base-layer for doing that.

1.2.4 How do you write an SSL-enabled server?

Here’s an example of a server that speaks SSL:

import curio
from curio import ssl
import time

KEYFILE = 'privkey_rsa'          # Private key
CERTFILE = 'certificate.crt'     # Server certificate

async def handler(client, addr):
    client_f = client.as_stream()

    # Read the HTTP request
    async for line in client_f:
        line = line.strip()
        if not line:
            break
        print(line)

    # Send a response
    await client_f.write(
b'''HTTP/1.0 200 OK\r
Content-type: text/plain\r
\r
If you're seeing this, it probably worked. Yay!''')
    await client_f.write(time.asctime().encode('ascii'))
    await client.close()

if __name__ == '__main__':
    ssl_context = ssl.create_default_context(ssl.Purpose.CLIENT_AUTH)
    ssl_context.load_cert_chain(certfile=CERTFILE, keyfile=KEYFILE)
    curio.run(curio.tcp_server, '', 10000, handler, ssl=ssl_context)

The curio.ssl submodule is a wrapper around the ssl module in the standard library. It has been modified slightly so that functions responsible for wrapping sockets return a socket compatible with curio. Otherwise, you'd use it the same way as the normal ssl module.

To test this out, point a browser at https://localhost:10000 and see if you get a readable response. The browser might yell at you with some warnings about the certificate if it's self-signed or misconfigured in some way. However, the example shows the basic steps involved in using SSL with curio.


1.2.5 How do you perform a blocking operation?

If you need to perform a blocking operation that runs outside of curio, use run_in_thread() to have it run in a backing thread. For example:

import time
import curio

result = await curio.run_in_thread(time.sleep, 100)

1.2.6 How do you perform a CPU intensive operation?

If you need to run a CPU-intensive operation, you can either run it in a thread (see above) or have it run in a separate process. For example:

import curio

def fib(n):
    if n <= 2:
        return 1
    else:
        return fib(n-1) + fib(n-2)

...
result = await curio.run_in_process(fib, 40)

Note: Since the operation in question runs in a separate interpreter, it should not involve any shared state. Make sure you pass all required information in the function's input arguments.

1.2.7 How do you apply a timeout?

You can make any curio operation time out using timeout_after(seconds, coro). For example:

from curio import timeout_after, TaskTimeout

try:
    result = await timeout_after(5, coro, args)
except TaskTimeout:
    print('Timed out')

Since wrapping a timeout in an exception is common, you can also use ignore_after(), which returns None instead. For example:

from curio import ignore_after

result = await ignore_after(5, coro, args)
if result is None:
    print('Timed out')

1.2.8 How can a timeout be applied to a block of statements?

Use the timeout_after() or ignore_after() functions as a context manager. For example:


try:
    async with timeout_after(5):
        statement1
        statement2
        ...
except TaskTimeout:
    print('Timed out')

This is a cumulative timeout applied to the entire block. After the specified number of seconds has elapsed, a TaskTimeout exception will be raised in the current operation blocking in curio.
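
ignore_after() works the same way as a context manager if you'd rather not deal with the exception; when the time limit expires, the remaining statements in the block are simply abandoned and execution continues after it. A minimal sketch:

from curio import run, ignore_after, sleep

async def main():
    async with ignore_after(5):
        await sleep(1000)          # Abandoned after 5 seconds
        print('never reached')
    print('carrying on')

if __name__ == '__main__':
    run(main)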

1.2.9 How do you shield operations from timeouts or cancellation?

To protect a block of statements from being aborted due to a timeout or cancellation, use disable_cancellation() as a context manager like this:

async def func():
    ...
    async with disable_cancellation():
        await coro1()
        await coro2()
        ...

    await blocking_op()     # Cancellation delivered here

1.2.10 How can tasks communicate?

Similar to threads, one of the easiest ways to communicate between tasks is to use a queue. For example:

import curio

async def producer(queue):
    for n in range(10):
        await queue.put(n)
    await queue.join()
    print('Producer done')

async def consumer(queue):
    while True:
        item = await queue.get()
        print('Consumer got', item)
        await queue.task_done()

async def main():
    q = curio.Queue()
    prod_task = await curio.spawn(producer, q)
    cons_task = await curio.spawn(consumer, q)
    await prod_task.join()
    await cons_task.cancel()

if __name__ == '__main__':
    curio.run(main)


1.2.11 How can a task and a thread communicate?

The most straightforward way to communicate between curio tasks and threads is to use curio's UniversalQueue class:

import curio
import threading

# A thread - standard python
def producer(queue):
    for n in range(10):
        queue.put(n)
    queue.join()
    print('Producer done')

# A task - Curio
async def consumer(queue):
    while True:
        item = await queue.get()
        print('Consumer got', item)
        await queue.task_done()

async def main():
    q = curio.UniversalQueue()
    prod_task = threading.Thread(target=producer, args=(q,))
    prod_task.start()
    cons_task = await curio.spawn(consumer, q)
    await curio.run_in_thread(prod_task.join)
    await cons_task.cancel()

if __name__ == '__main__':
    curio.run(main)

A UniversalQueue can be used by any combination of threads or curio tasks. The same API is used in both cases. However, when working with coroutines, queue operations must be prefaced by an await keyword.

1.2.12 How can coroutines and threads share a common lock?

A lock can be shared if the lock in question is one from the threading module and you use the curio abide() function. For example:

import threading
import curio

lock = threading.Lock()     # Must be a thread-lock

# Function running in a thread
def func():
    ...
    with lock:
        critical_section
    ...

# Coroutine running in curio
async def coro():
    ...
    async with curio.abide(lock):
        critical_section
    ...

curio.abide() adapts the given lock to work safely inside curio. If given a thread-lock, the various locking operations are executed in threads to avoid blocking other curio tasks.

1.2.13 How can synchronous code set an asynchronous event?

If you need to coordinate events between async and synchronous code, use a UniversalEvent object. For example:

from curio import UniversalEvent

evt = UniversalEvent()

def sync_func():
    ...
    evt.set()

async def async_func():
    await evt.wait()
    ...

A UniversalEvent allows setting and waiting in both synchronous and asynchronous code. You can flip the roles around as well:

def sync_func():
    evt.wait()
    ...

async def async_func():
    ...
    await evt.set()

Note: Waiting on an event in a synchronous function should take place in a separate thread to avoid blocking the kernel loop.

1.2.14 How do you run external commands in a subprocess?

Curio provides it’s own version of the subprocess module. Use the check_output() function as you would innormal Python code. For example:

from curio import subprocess

async def func():
    ...
    out = await subprocess.check_output(['cmd', 'arg1', 'arg2', 'arg3'])
    ...

The check_output() function takes the same arguments and raises the same exceptions as its standard library counterpart. The underlying implementation is built entirely using the async I/O primitives of curio. It's fast and no backing threads are used.


1.2.15 How can you communicate with a subprocess over a pipe?

Use the curio.subprocess module just like you would use the normal subprocess module. For example:

from curio import subprocess

async def func():
    ...
    p = subprocess.Popen(['cmd', 'arg1', 'arg2', ...],
                         stdin=subprocess.PIPE,
                         stdout=subprocess.PIPE)
    await p.stdin.write(b'Some data')
    ...
    resp = await p.stdout.read(maxsize)

In this example, the p.stdin and p.stdout streams are replaced by curio-compatible file streams. You use the same I/O operations as before, but make sure you preface them with await.

1.2.16 How can two different Python interpreters send messages to each other?

Use a Curio Channel instance to set up a communication channel. For example, you could make a producer program like this:

# producer.py
from curio import Channel, run

async def producer(ch):
    c = await ch.accept(authkey=b'peekaboo')
    for i in range(10):
        await c.send(i)     # Send some data
    await c.send(None)

if __name__ == '__main__':
    ch = Channel(('localhost', 30000))
    run(producer, ch)

Now, make a consumer program:

# consumer.py
from curio import Channel, run

async def consumer(ch):
    c = await ch.connect(authkey=b'peekaboo')
    while True:
        msg = await c.recv()
        if msg is None:
            break
        print('Got:', msg)

if __name__ == '__main__':
    ch = Channel(('localhost', 30000))
    run(consumer, ch)

Run each program separately and you should see messages received by the consumer program.

Channels allow arbitrary Python objects to be sent and received as messages as long as they are compatible with pickle.


1.2.17 How does a coroutine get its enclosing Task instance?

Use the current_task() function like this:

from curio import current_task
...
async def func():
    ...
    myself = await current_task()
    ...

Once you have a reference to the Task, it can be passed around and used in other operations. For example, a different task could use it to cancel that task.

1.3 Curio Reference Manual

This manual describes the basic concepts and functionality provided by curio.

1.3.1 Coroutines

Curio is solely concerned with the execution of coroutines. A coroutine is a function defined using async def. For example:

async def hello(name):
    return 'Hello ' + name

Coroutines call other coroutines using await. For example:

async def main(name):
    s = await hello(name)
    print(s)

Unlike a normal function, a coroutine can never run all on its own. It always has to execute under the supervision of a manager (e.g., an event-loop, a kernel, etc.). In curio, an initial coroutine is executed by a low-level kernel using the run() function. For example:

import curio
curio.run(main, 'Guido')

When executed by curio, a coroutine is considered to be a “Task.” Whenever the word “task” is used, it refers to the execution of a coroutine.

1.3.2 The Kernel

All coroutines in curio are executed by an underlying kernel. Normally, you would run a top-level coroutine using the following function:

run(corofunc, *args, debug=None, selector=None, with_monitor=False, **other_kernel_args)

Run the async function corofunc to completion and return its final return value. args are the arguments provided to corofunc. If with_monitor is True, then the monitor debugging task executes in the background. If selector is given, it should be an instance of a selector from the selectors module. debug is a list of optional debugging features. See the section on debugging for more detail. A RuntimeError is raised if run() is invoked more than once from the same thread.

If you are going to repeatedly run coroutines one after the other, it will be more efficient to create a Kernel instance and submit them using its run() method as described below:

Kernel(selector=None, debug=None)
Create an instance of a curio kernel. The arguments are the same as described above for the run() function.

There is only one method that may be used on a Kernel outside of coroutines.

Kernel.run(corofunc=None, *args, shutdown=False)
Runs the kernel until the supplied async function corofunc completes execution. The final result of this function, if supplied, is returned. args are the arguments given to corofunc. If shutdown is True, the kernel will cancel all remaining tasks and perform a clean shutdown. Calling this method with corofunc set to None causes the kernel to run through a single check for task activity before returning immediately. Raises a RuntimeError if a task is submitted to an already running kernel or if an attempt is made to run more than one kernel in a thread.

If submitting multiple tasks, one after another, from synchronous code, consider using a kernel as a context manager. For example:

with Kernel() as kernel:
    kernel.run(corofunc1)
    kernel.run(corofunc2)
    ...

# Kernel shuts down here

When submitting a task to the Kernel, you can either provide an async function and calling arguments or you can provide an instantiated coroutine. For example:

async def hello(name):
    print('hello', name)

run(hello, 'Guido')      # Preferred
run(hello('Guido'))      # Ok

This convention is observed by nearly all other functions that accept coroutines (e.g., spawning tasks, waiting for timeouts, etc.). As a general rule, the first form of providing a function and arguments should be preferred. This form of calling is required for certain parts of Curio, so your code will be more consistent if you use it.

1.3.3 Tasks

The following functions are defined to help manage the execution of tasks.

await spawn(corofunc, *args, daemon=False, report_crash=True)
Create a new task that runs the async function corofunc. args are the arguments provided to corofunc. Returns a Task instance as a result. The daemon option, if supplied, specifies that the new task will never be joined and that its result may be disregarded. The report_crash option specifies whether or not tasks that terminate due to an uncaught exception print a log message or not. The default behavior is True.

Note: spawn() creates a completely independent task. The resulting task is not placed into any kind of task group as might be managed by TaskGroup instances described later.

await current_task()
Returns a reference to the Task instance corresponding to the caller. A coroutine can use this to get a self-reference to its current Task instance if needed.

The spawn() and current_task() functions both return a Task instance that serves as a kind of wrapper around the underlying coroutine that’s executing.
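As a quick sketch of how these pieces fit together (the child and main names here are only illustrative):

import curio

async def child(x, y):
    await curio.sleep(1)
    return x + y

async def main():
    t = await curio.spawn(child, 2, 3)
    print('State:', t.state)            # scheduling state, useful for debugging
    result = await t.join()
    print('Result:', result)            # 5
    print('Terminated:', t.terminated)  # True

curio.run(main)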


class Task
A class representing an executing coroutine. This class cannot be created directly.

await Task.join()
Wait for the task to terminate. Returns the value returned by the task or raises a curio.TaskError exception if the task failed with an exception. This is a chained exception. The __cause__ attribute of this exception contains the actual exception raised by the task when it crashed. If called on a task that has been cancelled, the __cause__ attribute is set to curio.TaskCancelled.

await Task.wait()
Like join() but doesn’t return any value. The caller must obtain the result of the task separately via the result or exception attribute.

await Task.cancel(blocking=True)
Cancels the task. This raises a curio.TaskCancelled exception in the task which may choose to handle it in order to perform cleanup actions. If blocking=True (the default), does not return until the task actually terminates. Curio only allows a task to be cancelled once. If this method is somehow invoked more than once on a still running task, the second request will merely wait until the task is cancelled from the first request. If the task has already run to completion, this method does nothing and returns immediately. Returns True if the task was actually cancelled. False is returned if the task was already finished prior to the cancellation request. Cancelling a task also cancels any previously set timeout.

Task.traceback()
Create a stack traceback string for the task. Useful for debugging.

Task.where()
Return a (filename, lineno) tuple where the task is executing. For debugging.

The following public attributes are available on Task instances:

Task.id
The task’s integer id.

Task.coro
The underlying coroutine associated with the task.

Task.daemon
Boolean flag that indicates whether or not a task is daemonic.

Task.state
The name of the task’s current state. Printing it can be potentially useful for debugging.

Task.cycles
The number of scheduling cycles the task has completed. This might be useful if you’re trying to figure out if a task is running or not, or if you’re trying to monitor a task’s progress.

Task.result
The result of a task, if completed. If accessed before the task terminated, a RuntimeError exception is raised. If the task crashed with an exception, that exception is reraised on access.

Task.exception
Exception raised by a task, if any.

Task.cancelled
A boolean flag that indicates whether or not the task was cancelled.

Task.terminated
A boolean flag that indicates whether or not the task has run to completion.


1.3.4 Task Groups

Curio provides a mechanism for grouping tasks together and managing their execution. This includes cancelling tasks as a group, waiting for tasks to finish, or watching a group of tasks as they finish. To do this, create a TaskGroup instance.

class TaskGroup(tasks=(), *, wait=all, name=None)
A class representing a group of executing tasks. tasks is an optional set of existing tasks to put into the group. New tasks can later be added using the spawn() method below. wait specifies the policy used for waiting for tasks. See the join() method below. Each TaskGroup is an independent entity. Task groups do not form a hierarchy or any kind of relationship to other previously created task groups or tasks. Moreover, Tasks created by the top level spawn() function are not placed into any task group. To create a task in a group, it should be created using TaskGroup.spawn() or explicitly added using TaskGroup.add_task().

The following methods are supported on TaskGroup instances:

await TaskGroup.spawn(corofunc, *args, report_crash=True)
Create a new task that’s part of the group. Returns a Task instance. The report_crash flag controls whether a traceback is logged when a task exits with an uncaught exception.

await TaskGroup.add_task(coro)
Adds an already existing task to the task group.

await TaskGroup.next_done()
Returns the next completed task. Returns None if no more tasks remain. A TaskGroup may also be used as an asynchronous iterator.

await TaskGroup.next_result()
Returns the result of the next completed task. If the task failed with an exception, that exception is raised. A RuntimeError exception is raised if this is called when no remaining tasks are available.

await TaskGroup.join(*, wait=all)
Wait for tasks in the group to terminate. If wait is all, then wait for all tasks to complete. If wait is any, then wait for any task to terminate and cancel any remaining tasks. If wait is object, then wait for any task to terminate and return a non-None object, cancelling all remaining tasks afterwards. If any task returns with an error, then all remaining tasks are immediately cancelled and a TaskGroupError exception is raised. If the join() operation itself is cancelled, all remaining tasks in the group are also cancelled. If a TaskGroup is used as a context manager, the join() method is called on context-exit.

await TaskGroup.cancel_remaining()
Cancel all remaining tasks.

TaskGroup.completed
The first task that completed with a result in the group. Useful when used in combination with the wait=any or wait=object options on join().

The preferred way to use a TaskGroup is as a context manager. For example, here is how you can create a group of tasks, wait for them to finish, and collect their results:

async with TaskGroup() as g:
    t1 = await g.spawn(func1)
    t2 = await g.spawn(func2)
    t3 = await g.spawn(func3)

# all tasks done here
print('t1 got', t1.result)
print('t2 got', t2.result)
print('t3 got', t3.result)

Here is how you would launch tasks and collect their results in the order that they complete:


async with TaskGroup() as g:
    t1 = await g.spawn(func1)
    t2 = await g.spawn(func2)
    t3 = await g.spawn(func3)
    async for task in g:
        print(task, 'completed.', task.result)

If you wanted to launch tasks and exit when the first one has returned a result, use the wait=any option like this:

async with TaskGroup(wait=any) as g:
    await g.spawn(func1)
    await g.spawn(func2)
    await g.spawn(func3)

result = g.completed.result   # First completed task

If any exception is raised inside the task group context, all launched tasks are cancelled and the exception is reraised. For example:

try:
    async with TaskGroup() as g:
        t1 = await g.spawn(func1)
        t2 = await g.spawn(func2)
        t3 = await g.spawn(func3)
        raise RuntimeError()
except RuntimeError:
    # All launched tasks will have terminated or been cancelled
    assert t1.terminated
    assert t2.terminated
    assert t3.terminated

This behavior also applies to features such as a timeout. For example:

try:
    async with timeout_after(10):
        async with TaskGroup() as g:
            t1 = await g.spawn(func1)
            t2 = await g.spawn(func2)
            t3 = await g.spawn(func3)
        # All tasks cancelled here on timeout
except TaskTimeout:
    # All launched tasks will have terminated or been cancelled
    assert t1.terminated
    assert t2.terminated
    assert t3.terminated

The timeout exception itself is only raised in the code that’s using the task group. Child tasks are cancelled using the cancel() method and would receive a TaskCancelled exception.

If any launched tasks exit with an exception other than TaskCancelled, a TaskGroupError exception is raised. For example:

async def bad1():
    raise ValueError('bad value')

async def bad2():
    raise RuntimeError('bad run')

try:
    async with TaskGroup() as g:
        await g.spawn(bad1)
        await g.spawn(bad2)
        await sleep(1)
except TaskGroupError as e:
    print('Failed:', e.errors)   # Print set of exception types
    for task in e:
        print('Task', task, 'failed because of:', task.exception)

A TaskGroupError exception contains more information about what happened with the tasks. The errors attribute is a set of exception types that took place. In this example, it would be the set { ValueError, RuntimeError }. To get more specific information, you can iterate over the exception (or look at its failed attribute). This will produce all of the tasks that failed. The task.exception attribute can be used to get specific exception information for that task.

1.3.5 Time

The following functions are used by tasks to help manage time.

await sleep(seconds)
Sleep for a specified number of seconds. If the number of seconds is 0, the kernel merely switches to the next task (if any).

await wake_at(clock)
Sleep until the monotonic clock reaches the given absolute clock value. Returns the value of the monotonic clock at the time the task awakes. Use this function if you need to have more precise interval timing.

await clock()
Returns the current value of the kernel clock. This is often used in conjunction with the wake_at() function (you’d use this to get an initial clock value to pass as an argument).
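For instance, here is a minimal sketch of drift-free periodic scheduling using clock() and wake_at(), assuming both are importable from the top-level curio package in the same way as sleep():

import curio

async def ticker():
    deadline = await curio.clock()
    for n in range(3):
        deadline += 1                  # advance the deadline rather than sleeping "now + 1"
        await curio.wake_at(deadline)
        print('tick', n)

curio.run(ticker)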

1.3.6 Timeouts

Any blocking operation in curio can be cancelled after a timeout. The following functions can be used for this purpose:

await timeout_after(seconds, corofunc=None, *args)
Execute the specified coroutine and return its result. However, issue a cancellation request to the calling task after seconds have elapsed. When this happens, a curio.TaskTimeout exception is raised. If corofunc is None, the result of this function serves as an asynchronous context manager that applies a timeout to a block of statements.

timeout_after() may be composed with other timeout_after() operations (i.e., nested timeouts). If an outer timeout expires first, then curio.TimeoutCancellationError is raised instead of curio.TaskTimeout. If an inner timeout expires and fails to properly catch curio.TaskTimeout, a curio.UncaughtTimeoutError is raised in the outer timeout.

await timeout_at(deadline, corofunc=None, *args)
The same as timeout_after() except that the deadline time is given as an absolute clock time. Use the clock() function to get a base time for computing a deadline.

await ignore_after(seconds, corofunc=None, *args, timeout_result=None)
Execute the specified coroutine and return its result. Issue a cancellation request after seconds have elapsed. When a timeout occurs, no exception is raised. Instead, None or the value of timeout_result is returned. If corofunc is None, the result is an asynchronous context manager that applies a timeout to a block of statements. For the context manager case, the resulting context manager object has an expired attribute set to True if time expired.

Note: ignore_after() may also be composed with other timeout operations. curio.TimeoutCancellationError and curio.UncaughtTimeoutError exceptions might be raised according to the same rules as for timeout_after().

await ignore_at(deadline, corofunc=None, *args)
The same as ignore_after() except that the deadline time is given as an absolute clock time.

Here is an example that shows how these functions can be used:

# Execute coro(args) with a 5 second timeout
try:
    result = await timeout_after(5, coro, args)
except TaskTimeout as e:
    result = None

# Execute multiple statements with a 5 second timeout
try:
    async with timeout_after(5):
        await coro1(args)
        await coro2(args)
        ...
except TaskTimeout as e:
    # Handle the timeout
    ...

The difference between timeout_after() and ignore_after() concerns the exception handling behavior when time expires. The latter function returns None instead of raising an exception, which might be more convenient in certain cases. For example:

result = await ignore_after(5, coro, args)
if result is None:
    # Timeout occurred (if you care)
    ...

# Execute multiple statements with a 5 second timeout
async with ignore_after(5) as s:
    await coro1(args)
    await coro2(args)

if s.expired:
    # Timeout occurred
    ...

It’s important to note that every curio operation can be cancelled by timeout. Rather than having every possible call take an explicit timeout argument, you should wrap the call using timeout_after() or ignore_after() as appropriate.

1.3.7 Cancellation Control

disable_cancellation(corofunc=None, *args)
Disables the delivery of cancellation-related exceptions to the calling task. Cancellations will be delivered to the first blocking operation that’s performed once cancellation delivery is reenabled. This function may be used to shield a single coroutine or used as a context manager (see example below).


enable_cancellation(corofunc=None, *args)
Reenables the delivery of cancellation-related exceptions. It may only be used inside a context in which cancellation has been disabled. This function may be used to shield a single coroutine or used as a context manager (see example below).

await check_cancellation()
Checks to see if any cancellation is pending for the calling task. If cancellation is allowed, a cancellation exception is raised immediately. If cancellation is not allowed, it returns the pending cancellation exception instance (if any). Returns None if no cancellation is pending.

Use of these functions is highly specialized and is probably best avoided. Here is an example that shows typical usage:

async def coro():
    async with disable_cancellation():
        while True:
            await coro1()
            await coro2()
            async with enable_cancellation():
                await coro3()      # May be cancelled
                await coro4()      # May be cancelled

            if await check_cancellation():
                break    # Bail out!

    await blocking_op()    # Cancellation (if any) delivered here

If you only need to shield a single operation, you can write statements like this:

async def coro():
    ...
    await disable_cancellation(some_operation, x, y, z)
    ...

This is shorthand for writing the following:

async def coro():
    ...
    async with disable_cancellation():
        await some_operation(x, y, z)
    ...

See the section on cancellation in the Curio Developer’s Guide for more detailed information.

1.3.8 Performing External Work

Sometimes you need to perform work outside the kernel. This includes CPU-intensive calculations and blocking operations. Use the following functions to do that:

await curio.workers.run_in_process(callable, *args)
Run callable(*args) in a separate process and return the result. If cancelled, the underlying worker process (if started) is immediately cancelled by a SIGTERM signal. It is important to note that the given callable is executed in an entirely independent Python interpreter and that no shared global state should be assumed. The separate process is launched using the “spawn” method of the multiprocessing module.

await curio.workers.run_in_thread(callable, *args)
Run callable(*args) in a separate thread and return the result. If the calling task is cancelled, the underlying worker thread (if started) is set aside and sent a termination request. However, since there is no underlying mechanism to forcefully kill threads, the thread won’t recognize the termination request until it runs the requested work to completion. It’s important to note that a cancellation won’t block other tasks from using threads. Instead, cancellation produces a kind of “zombie thread” that executes the requested work, discards the result, and then disappears. For reliability, work submitted to threads should have a timeout or some other mechanism that puts a bound on execution time.

await curio.workers.block_in_thread(callable, *args)
The same as run_in_thread(), but guarantees that only one background thread is used for each unique callable regardless of how many tasks simultaneously try to carry out the same operation at once. Only use this function if there is an expectation that the provided callable is going to block for an undetermined amount of time and that there might be a large amount of contention from multiple tasks on the same resource. The primary use is on waiting operations involving foreign locks and queues. For example, if you launched a hundred Curio tasks and they all decided to block on a shared thread queue, using this would be much more efficient than run_in_thread().

await curio.workers.run_in_executor(exc, callable, *args)
Run callable(*args) in a user-supplied executor and return the result. exc is an executor from the concurrent.futures module in the standard library. This executor is expected to implement a submit() method that executes the given callable and returns a Future instance for collecting its result.

When performing external work, it’s almost always better to use the run_in_process() and run_in_thread() functions instead of run_in_executor(). These functions have no external library dependencies, have substantially less communication overhead, and have more predictable cancellation semantics.
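Here is a minimal sketch of both helpers, assuming they are importable from the top-level curio package (as run_in_thread is in the queue example later); the crunch function is purely illustrative:

import time
import curio

def crunch(n):
    # CPU-bound work; executes in a separate interpreter process
    return sum(i * i for i in range(n))

async def main():
    total = await curio.run_in_process(crunch, 1_000_000)
    await curio.run_in_thread(time.sleep, 2)    # a blocking call, offloaded to a thread
    print('sum of squares:', total)

if __name__ == '__main__':
    # The __main__ guard matters because run_in_process() uses the
    # multiprocessing "spawn" start method, which re-imports this module.
    curio.run(main)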

The following values in curio.workers define how many worker threads and processes are used. If you are going to change these values, do it before any tasks are executed.

curio.workers.MAX_WORKER_THREADS
Specifies the maximum number of threads used by a single kernel using the run_in_thread() function. Default value is 64.

curio.workers.MAX_WORKER_PROCESSES
Specifies the maximum number of processes used by a single kernel using the run_in_process() function. Default value is the number of CPUs on the host system.
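A minimal sketch of adjusting these limits before any tasks run (the chosen values are arbitrary):

import curio
import curio.workers

async def main():
    ...

if __name__ == '__main__':
    # Set the limits before the kernel executes any tasks
    curio.workers.MAX_WORKER_THREADS = 128
    curio.workers.MAX_WORKER_PROCESSES = 2
    curio.run(main)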

1.3.9 I/O Layer

I/O in curio is performed by classes in curio.io that wrap around existing sockets and streams. These classes manage the blocking behavior and delegate their methods to an existing socket or file.

Socket

The Socket class is used to wrap an existing socket. It is compatible with sockets from the built-in socket module as well as SSL-wrapped sockets created by the built-in ssl module. Sockets in curio should be fully compatible with most common socket features.

class curio.io.Socket(sockobj)
Creates a wrapper around an existing socket sockobj. This socket is set in non-blocking mode when wrapped. sockobj is not closed unless the created instance is explicitly closed or used as a context manager.

The following methods are redefined on Socket objects to be compatible with coroutines. Any socket method not listed here will be delegated directly to the underlying socket. Be aware that not all methods have been wrapped and that using a method not listed here might block the kernel or raise a BlockingIOError exception.

await Socket.recv(maxbytes, flags=0)
Receive up to maxbytes of data.


await Socket.recv_into(buffer, nbytes=0, flags=0)
Receive up to nbytes of data into a buffer object.

await Socket.recvfrom(maxsize, flags=0)
Receive up to maxsize of data. Returns a tuple (data, client_address).

await Socket.recvfrom_into(buffer, nbytes=0, flags=0)
Receive up to nbytes of data into a buffer object.

await Socket.recvmsg(bufsize, ancbufsize=0, flags=0)
Receive normal and ancillary data.

await Socket.recvmsg_into(buffers, ancbufsize=0, flags=0)
Receive normal and ancillary data.

await Socket.send(data, flags=0)
Send data. Returns the number of bytes of data actually sent (which may be less than provided in data).

await Socket.sendall(data, flags=0)
Send all of the data in data. If cancelled, the bytes_sent attribute of the resulting exception contains the actual number of bytes sent.

await Socket.sendto(data, address)

await Socket.sendto(data, flags, address)
Send data to the specified address.

await Socket.sendmsg(buffers, ancdata=(), flags=0, address=None)
Send normal and ancillary data to the socket.

await Socket.accept()
Wait for a new connection. Returns a tuple (sock, address).

await Socket.connect(address)
Make a connection.

await Socket.connect_ex(address)
Make a connection and return an error code instead of raising an exception.

await Socket.close()
Close the connection. This method is not called on garbage collection.

await curio.io.do_handshake()
Perform an SSL client handshake. The underlying socket must have already been wrapped by SSL using the curio.ssl module.

Socket.makefile(mode, buffering=0)
Make a file-like object that wraps the socket. The resulting file object is a curio.io.FileStream instance that supports non-blocking I/O. mode specifies the file mode which must be one of 'rb' or 'wb'. buffering specifies the buffering behavior. By default unbuffered I/O is used. Note: It is not currently possible to create a stream with Unicode text encoding/decoding applied to it, so those options are not available. If you are trying to put a file-like interface on a socket, it is usually better to use the Socket.as_stream() method below.

Socket.as_stream()
Wrap the socket as a stream using curio.io.SocketStream. The result is a file-like object that can be used for both reading and writing on the socket.

Socket.blocking()
A context manager that temporarily places the socket into blocking mode and returns the raw socket object used internally. This can be used if you need to pass the socket to existing synchronous code.

Socket objects may be used as an asynchronous context manager, which causes the underlying socket to be closed when done. For example:


async with sock:
    # Use the socket
    ...

# socket closed here
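As a small sketch of combining these pieces, here is line-oriented I/O on a socket via as_stream(); it assumes the resulting SocketStream supports readline() and write() in the same way as FileStream, as described in the next sections:

async def exchange_greeting(sock):
    stream = sock.as_stream()            # curio.io.SocketStream
    greeting = await stream.readline()   # read one line from the peer
    await stream.write(b'HELLO\r\n')
    return greeting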

FileStream

The FileStream class puts a non-blocking wrapper around an existing file-like object. Certain other functions in curio use this (e.g., the Socket.makefile() method).

class curio.io.FileStream(fileobj)
Create a file-like wrapper around an existing file. fileobj must be in binary mode. The file is placed into non-blocking mode using os.set_blocking(fileobj.fileno()). fileobj is not closed unless the resulting instance is explicitly closed or used as a context manager.

The following methods are available on instances of FileStream:

await FileStream.read(maxbytes=-1)
Read up to maxbytes of data on the file. If omitted, reads as much data as is currently available and returns it.

await FileStream.readall()
Return all of the data that’s available on a file up until an EOF is read.

await FileStream.readline()
Read a single line of data from a file.

await FileStream.readlines()
Read all of the lines from a file. If cancelled, the lines_read attribute of the resulting exception contains all of the lines that were read so far.

await FileStream.write(bytes)
Write all of the data in bytes to the file.

await FileStream.writelines(lines)
Writes all of the lines in lines to the file. If cancelled, the bytes_written attribute of the exception contains the total bytes written so far.

await FileStream.flush()
Flush any unwritten data from buffers to the file.

await FileStream.close()
Flush any unwritten data and close the file. This method is not called on garbage collection.

FileStream.blocking()
A context manager that temporarily places the stream into blocking mode and returns the raw file object used internally. This can be used if you need to pass the file to existing synchronous code.

Other file methods (e.g., tell(), seek(), etc.) are available if the supplied fileobj also has them.

A FileStream may be used as an asynchronous context manager. For example:

async with stream:
    # Use the stream object
    ...

# stream closed here


SocketStream

The SocketStream class puts a non-blocking file-like interface around a socket. This is normally created by the Socket.as_stream() method.

class curio.io.SocketStream(sock)
Create a file-like wrapper around an existing socket. sock must be a socket instance from Python’s built-in socket module. The socket is placed into non-blocking mode. sock is not closed unless the resulting instance is explicitly closed or used as a context manager.

A SocketStream instance supports the same methods as FileStream above. One subtle issue concerns the blocking() method below.

SocketStream.blocking()
A context manager that temporarily places the stream into blocking mode and returns a raw file object that wraps the underlying socket. It is important to note that the return value of this operation is a file created by open(sock.fileno(), 'rb+', closefd=False). You can pass this object to code that is expecting to work with a file. The file is not closed when garbage collected.

1.3.10 socket wrapper module

The curio.socket module provides a wrapper around the built-in socket module, allowing it to be used as a stand-in in curio-related code. The module provides exactly the same functionality except that certain operations have been replaced by coroutine equivalents.

curio.socket.socket(family=AF_INET, type=SOCK_STREAM, proto=0, fileno=None)
Creates a curio.io.Socket wrapper around socket objects created in the built-in socket module. The arguments for construction are identical and have the same meaning. The resulting socket instance is set in non-blocking mode.

The following module-level functions have been modified so that the returned socket objects are compatible with curio:

curio.socket.socketpair(family=AF_UNIX, type=SOCK_STREAM, proto=0)

curio.socket.fromfd(fd, family, type, proto=0)

curio.socket.create_connection(address, source_address)

The following module-level functions have been redefined as coroutines so that they don’t block the kernel when interacting with DNS:

await curio.socket.getaddrinfo(host, port, family=0, type=0, proto=0, flags=0)

await curio.socket.getfqdn(name)

await curio.socket.gethostbyname(hostname)

await curio.socket.gethostbyname_ex(hostname)

await curio.socket.gethostname()

await curio.socket.gethostbyaddr(ip_address)

await curio.socket.getnameinfo(sockaddr, flags)
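For instance, a minimal sketch of non-blocking name resolution with the coroutine version of getaddrinfo() (the hostname and port are arbitrary, and the socket constants are assumed to be re-exported by curio.socket just like the rest of the built-in module):

import curio
from curio import socket

async def show_addresses(hostname):
    results = await socket.getaddrinfo(hostname, 443, type=socket.SOCK_STREAM)
    for family, kind, proto, canonname, sockaddr in results:
        print(sockaddr)

curio.run(show_addresses, 'www.python.org')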

1.3.11 ssl wrapper module

The curio.ssl module provides curio-compatible functions for creating an SSL layer around curio sockets. The following functions are redefined (and have the same calling signature as their counterparts in the standard ssl module):

await curio.ssl.wrap_socket(*args, **kwargs)

await curio.ssl.get_server_certificate(*args, **kwargs)

curio.ssl.create_default_context(*args, **kwargs)

class curio.ssl.SSLContext
A redefined and modified variant of ssl.SSLContext so that the wrap_socket() method returns a socket compatible with curio.

Don’t attempt to use the curio.ssl module without a careful read of Python’s official documentation at https://docs.python.org/3/library/ssl.html.

For the purposes of curio, it is usually easier to apply SSL to a connection using some of the high level network functions described in the next section. For example, here’s how you make an outgoing SSL connection:

sock = await curio.open_connection('www.python.org', 443,
                                   ssl=True,
                                   server_hostname='www.python.org')

Here’s how you might define a server that uses SSL:

import curio
from curio import ssl

KEYFILE = "privkey_rsa"        # Private key
CERTFILE = "certificate.crt"   # Server certificate

async def handler(client, addr):
    ...

if __name__ == '__main__':
    kernel = curio.Kernel()
    ssl_context = ssl.create_default_context(ssl.Purpose.CLIENT_AUTH)
    ssl_context.load_cert_chain(certfile=CERTFILE, keyfile=KEYFILE)
    kernel.run(curio.tcp_server('', 10000, handler, ssl=ssl_context))

1.3.12 High Level Networking

The following functions are provided to simplify common tasks related to making network connections and writing servers.

await curio.open_connection(host, port, *, ssl=None, source_addr=None, server_hostname=None, alpn_protocols=None)

Creates an outgoing connection to a server at host and port. This connection is made using the socket.create_connection() function and might be IPv4 or IPv6 depending on the network configuration (although you’re not supposed to worry about it). ssl specifies whether or not SSL should be used. ssl can be True or an instance of curio.ssl.SSLContext. source_addr specifies the source address to use on the socket. server_hostname specifies the hostname to check against when making SSL connections. It is highly advised that this be supplied to avoid man-in-the-middle attacks. alpn_protocols specifies a list of protocol names for use with the TLS ALPN extension (RFC7301). A typical value might be ['h2', 'http/1.1'] for negotiating either an HTTP/2 or HTTP/1.1 connection.

await curio.open_unix_connection(path, *, ssl=None, server_hostname=None, alpn_protocols=None)

Creates a connection to a Unix domain socket with optional SSL applied.


await curio.tcp_server(host, port, client_connected_task, *, family=AF_INET, backlog=100, ssl=None, reuse_address=True, reuse_port=False)

Creates a server for receiving TCP connections on a given host and port. client_connected_task is a coroutine that is to be called to handle each connection. family specifies the address family and is either socket.AF_INET or socket.AF_INET6. backlog is the argument to the socket.socket.listen() method. ssl specifies a curio.ssl.SSLContext instance to use. reuse_address specifies whether to reuse a previously used port. reuse_port specifies whether to use the SO_REUSEPORT socket option prior to binding.

await curio.unix_server(path, client_connected_task, *, backlog=100, ssl=None)
Creates a Unix domain server on a given path. client_connected_task is a coroutine to execute on each connection. backlog is the argument given to the socket.socket.listen() method. ssl is an optional curio.ssl.SSLContext to use if setting up an SSL connection.

await curio.run_server(sock, client_connected_task, ssl=None)
Runs a server on a given socket. sock is a socket already configured to receive incoming connections. client_connected_task and ssl have the same meaning as for the tcp_server() and unix_server() functions. If you need to perform some kind of special socket setup, not possible with the normal tcp_server() function, you can create the underlying socket yourself and then call this function to run a server on it.

curio.tcp_server_socket(host, port, family=AF_INET, backlog=100, reuse_address=True, reuse_port=False)

Creates and returns a TCP socket. Arguments are the same as for the tcp_server() function. The socket is suitable for use with other async operations as well as the run_server() function.

curio.unix_server_socket(path, backlog=100)
Creates and returns a Unix socket. Arguments are the same as for the unix_server() function. The socket is suitable for use with other async operations as well as the run_server() function.
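Tying these functions together, here is a minimal sketch of a TCP echo server (the handler name and port number are arbitrary):

import curio

async def echo_handler(client, addr):
    print('Connection from', addr)
    async with client:
        while True:
            data = await client.recv(100000)
            if not data:
                break
            await client.sendall(data)
    print('Connection closed')

if __name__ == '__main__':
    curio.run(curio.tcp_server, '', 25000, echo_handler)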

1.3.13 Message Passing and Channels

Curio provides a Channel class that can be used to perform message passing between interpreters running in separate processes.

class curio.channel.Channel(address, family=socket.AF_INET)
Represents a communications endpoint for message passing. address is the address and family is the protocol family.

The following methods are used to establish a connection on a Channel instance.

await Channel.accept(*, authkey=None)
Wait for an incoming connection. authkey is an optional authentication key that can be used to authenticate the client. Authentication involves computing an HMAC-based cryptographic digest. The key itself is not transmitted. Returns a Connection instance.

await Channel.connect(*, authkey=None)
Make an outgoing connection. authkey is an optional authentication key. This method repeatedly attempts to make a connection if the other endpoint is not responding. Returns a Connection instance.

Channel.bind()
Performs the address binding step of the accept() method and returns. You can use this if you want the host operating system to assign a port number for you. For example, you can supply an initial address of ('localhost', socket.INADDR_ANY) and call bind(). Afterwards, the address attribute of the Channel instance contains the assigned address.

await Channel.close()
Close the channel.

The connect() and accept() methods of Channel instances return a Connection instance.


class curio.channel.Connection(reader, writer)
Represents a connection on which message passing of Python objects is supported. reader and writer are Curio I/O streams on which reading and writing are to take place.

Instances of Connection support the following methods:

await curio.channel.close()
Close the connection by closing both the reader and writer streams.

await curio.channel.recv()
Receive a Python object. The received object is unserialized using the pickle module.

await curio.channel.recv_bytes(maxlength=None)
Receive a raw message of bytes. maxlength specifies a maximum message size. By default messages may be of arbitrary size.

await curio.channel.send(obj)
Send a Python object. The object must be compatible with the pickle module.

await curio.channel.send_bytes(buf, offset=0, size=None)
Send a buffer of bytes as a single message. offset and size specify an optional byte offset and size into the underlying memory buffer.

await curio.channel.authenticate_server(authkey)
Authenticate the connection for a server.

await curio.channel.authenticate_client(authkey)
Authenticate the connection for a client.

A Connection instance may also be used as a context manager.

Here is an example of a producer program using channels:

# producer.py
from curio import Channel, run

async def producer(ch):
    c = await ch.accept(authkey=b'peekaboo')
    for i in range(10):
        await c.send(i)
    await c.send(None)   # Sentinel

if __name__ == '__main__':
    ch = Channel(('localhost', 30000))
    run(producer(ch))

Here is an example of a corresponding consumer program using a channel:

# consumer.py
from curio import Channel, run

async def consumer(ch):
    c = await ch.connect(authkey=b'peekaboo')
    while True:
        msg = await c.recv()
        if msg is None:
            break
        print('Got:', msg)

if __name__ == '__main__':
    ch = Channel(('localhost', 30000))
    run(consumer(ch))

1.3.14 subprocess wrapper module

The curio.subprocess module provides a wrapper around the built-in subprocess module.

class curio.subprocess.Popen(*args, **kwargs)
A wrapper around the subprocess.Popen class. The same arguments are accepted. On the resulting Popen instance, the stdin, stdout, and stderr file attributes have been wrapped by the curio.io.FileStream class. You can use these in an asynchronous context.

Here is an example of using Popen to read streaming output off of a subprocess with curio:

import curio
from curio import subprocess

async def main():
    p = subprocess.Popen(['ping', 'www.python.org'], stdout=subprocess.PIPE)
    async for line in p.stdout:
        print('Got:', line.decode('ascii'), end='')

if __name__ == '__main__':
    kernel = curio.Kernel()
    kernel.run(main)

The following methods of Popen have been replaced by asynchronous equivalents:

await Popen.wait()
Wait for a subprocess to exit. Cancellation does not terminate the process.

await Popen.communicate(input=b'')
Communicate with the subprocess, sending the specified input on standard input. Returns a tuple (stdout, stderr) with the resulting output of standard output and standard error. If cancelled, the resulting exception has stdout and stderr attributes that contain the output read prior to cancellation. Cancellation does not terminate the underlying subprocess.

The following functions are also available. They accept the same arguments as their equivalents in the subprocess module:

await curio.subprocess.run(args, stdin=None, input=None, stdout=None, stderr=None, shell=False, check=False)

Run a command in a subprocess. Returns a subprocess.CompletedProcess instance. If cancelled, the underlying process is terminated using the process kill() method. The resulting exception will have stdout and stderr attributes containing output read prior to cancellation.

await curio.subprocess.check_output(args, stdout=None, stderr=None, shell=False)
Run a command in a subprocess and return the resulting output. Raises a subprocess.CalledProcessError exception if an error occurred. The behavior on cancellation is the same as for run().
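For instance, a minimal sketch of run() collecting the output of a command (the command itself is only an example):

from curio import run, subprocess

async def main():
    proc = await subprocess.run(['echo', 'hello'],
                                stdout=subprocess.PIPE, check=True)
    print('Exit code:', proc.returncode)
    print('Output:', proc.stdout)

run(main)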

1.3.15 file wrapper module

One problem concerning coroutines and async programming is access to files on the normal file system. Yes, you can use the built-in open() function, but what happens afterwards is hard to predict. Internally, the operating system might have to access a disk drive or perform networking of its own. Either way, the operation might take a long time to complete, and while it does, the whole Curio kernel will be blocked. You really don’t want that, especially if the system is under heavy load.

The curio.file module provides an asynchronous-compatible replacement for the built-in open() function and associated file objects, should you want to read and write traditional files on the filesystem. The underlying implementation avoids blocking. How this is accomplished is an implementation detail (although threads are used in the initial version).

curio.file.aopen(*args, **kwargs)
Creates a curio.file.AsyncFile wrapper around a traditional file object as returned by Python’s builtin open() function. The arguments are exactly the same as for open(). The returned file object must be used as an asynchronous context manager.

class curio.file.AsyncFile(fileobj)
This class represents an asynchronous file as returned by the aopen() function. Normally, instances are created by the aopen() function. However, it can be wrapped around an already-existing file object that was opened using the built-in open() function.

The following methods are redefined on AsyncFile objects to be compatible with coroutines. Any method not listed here will be delegated directly to the underlying file. These methods take the same arguments as the underlying file object. Be aware that not all of these methods are available on all kinds of files (e.g., read1(), readinto() and similar methods are only available in binary-mode files).

await AsyncFile.read(*args, **kwargs)

await AsyncFile.read1(*args, **kwargs)

await AsyncFile.readline(*args, **kwargs)

await AsyncFile.readlines(*args, **kwargs)

await AsyncFile.readinto(*args, **kwargs)

await AsyncFile.readinto1(*args, **kwargs)

await AsyncFile.write(*args, **kwargs)

await AsyncFile.writelines(*args, **kwargs)

await AsyncFile.truncate(*args, **kwargs)

await AsyncFile.seek(*args, **kwargs)

await AsyncFile.tell(*args, **kwargs)

await AsyncFile.flush()

await AsyncFile.close()

AsyncFile objects should always be used as an asynchronous context manager. For example:

async with aopen(filename) as f:
    # Use the file
    data = await f.read()

AsyncFile objects may also be used with asynchronous iteration. For example:

async with aopen(filename) as f:
    async for line in f:
        ...

AsyncFile objects are intentionally incompatible with code that uses files in a synchronous manner. Partly, this is to help avoid unintentional errors in your program where blocking might occur without you realizing it. If you know what you’re doing and you need to access the underlying file in synchronous code, use the blocking() context manager like this:

async with aopen(filename) as f:
    ...
    # Pass to synchronous code (danger: might block)
    with f.blocking() as sync_f:
        # Use synchronous I/O operations
        data = sync_f.read()
        ...

1.3.16 Synchronization Primitives

The following synchronization primitives are available. Their behavior is similar to their equivalents in the threading module. None of these primitives are safe to use with threads created by the built-in threading module.

class Event
An event object.

Event instances support the following methods:

Event.is_set()
Return True if the event is set.

Event.clear()
Clear the event.

await Event.wait()
Wait for the event.

await Event.set()
Set the event. Wake all waiting tasks (if any).

Here is an Event example:

import curio

async def waiter(evt):
    print('Waiting')
    await evt.wait()
    print('Running')

async def main():
    evt = curio.Event()
    # Create a few waiters
    await curio.spawn(waiter(evt))
    await curio.spawn(waiter(evt))
    await curio.spawn(waiter(evt))

    await curio.sleep(5)

    # Set the event. All waiters should wake up
    await evt.set()

curio.run(main)

class Lock
This class provides a mutex lock. It can only be used in tasks. It is not thread safe.


Lock instances support the following methods:

await Lock.acquire()
Acquire the lock.

await Lock.release()
Release the lock.

Lock.locked()
Return True if the lock is currently held.

The preferred way to use a Lock is as an asynchronous context manager. For example:

import curio

async def child(lck):
    async with lck:
        print('Child has the lock')

async def main():
    lck = curio.Lock()
    async with lck:
        print('Parent has the lock')
        await curio.spawn(child(lck))
        await curio.sleep(5)

curio.run(main())

Note that due to the asynchronous nature of the context manager, the lock could be acquired by another waiter before the current task executes the first line after the context, which might surprise a user:

lck = Lock()

async def foo():
    async with lck:
        print('locked')
        # since the actual call to lck.release() will be done before
        # exiting the context, some other waiter coroutine could be
        # scheduled to run before we actually exit the context
    print('This line might be executed after '
          'another coroutine acquires this lock')

class RLock
This class provides recursive lock functionality; the lock can be acquired multiple times within the same task. The behavior of this lock is identical to threading.RLock, except that the owner of the lock is the task which acquired it, instead of a thread.

RLock instances support the following methods:

await RLock.acquire()
Acquire the lock, incrementing the recursion level by 1. Can be called multiple times by the task that owns the lock.

await RLock.release()
Release the lock, decrementing the recursion level by 1. If the recursion level reaches 0, the lock is unlocked. Raises a RuntimeError if called by a task other than the owner or if the lock is not locked.

RLock.locked()
Return True if the lock is currently held, i.e., the recursion level is greater than 0.
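A minimal sketch of re-entrancy within a single task, assuming RLock is importable from the top-level curio package and supports the async-with protocol like Lock:

import curio

lock = curio.RLock()

async def inner():
    async with lock:
        print('inner re-acquired the lock')

async def outer():
    async with lock:
        print('outer holds the lock')
        await inner()        # same task re-acquires without deadlocking

curio.run(outer)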

class Semaphore(value=1)
Create a semaphore. Semaphores are based on a counter. If the count is greater than 0, it is decremented and the semaphore is acquired. Otherwise, the task has to wait until the count is incremented by another task.

class BoundedSemaphore(value=1)
This class is the same as Semaphore except that the semaphore value is not allowed to exceed the initial value.

Semaphores support the following methods:

await Semaphore.acquire()
Acquire the semaphore, decrementing its count. Blocks if the count is 0.

await Semaphore.release()
Release the semaphore, incrementing its count. Never blocks.

Semaphore.locked()
Return True if the Semaphore is locked.

Semaphore.value
A read-only property giving the current value of the semaphore.

BoundedSemaphore.bound
A read-only property giving the upper bound of a bounded semaphore.

Like locks, semaphores support the async-with statement. A common use of semaphores is to limit the number of tasks performing an operation. For example:

import curio

async def worker(sema):
    async with sema:
        print('Working')
        await curio.sleep(5)

async def main():
    sema = curio.Semaphore(2)     # Allow two tasks at a time

    # Launch a bunch of tasks
    for n in range(10):
        await curio.spawn(worker(sema))

    # After this point, you should see two tasks at a time run. Every 5 seconds.

curio.run(main())

class Condition(lock=None)
Condition variable. lock is the underlying lock to use. If none is provided, then a Lock object is used.

Condition objects support the following methods:

Condition.locked()
Return True if the condition variable is locked.

await Condition.acquire()
Acquire the condition variable lock.

await Condition.release()
Release the condition variable lock.

await Condition.wait()
Wait on the condition variable. This releases the underlying lock.

await Condition.wait_for(predicate)
Wait on the condition variable until a supplied predicate function returns True. predicate is a callable that takes no arguments.


await Condition.notify(n=1)
Notify one or more tasks, causing them to wake from the Condition.wait() method.

await Condition.notify_all()
Notify all tasks waiting on the condition.

Condition variables are often used to signal between tasks. For example, here is a simple producer-consumer scenario:

import curio
from collections import deque

items = deque()

async def consumer(cond):
    while True:
        async with cond:
            while not items:
                await cond.wait()     # Wait for items
            item = items.popleft()
            print('Got', item)

async def producer(cond):
    for n in range(10):
        async with cond:
            items.append(n)
            await cond.notify()
        await curio.sleep(1)

async def main():
    cond = curio.Condition()
    await curio.spawn(producer(cond))
    await curio.spawn(consumer(cond))

curio.run(main())

1.3.17 Queues

If you want to communicate between tasks, it’s usually much easier to use a Queue instead.

class Queue(maxsize=0)
Creates a queue with a maximum number of elements in maxsize. If not specified, the queue can hold an unlimited number of items.

A Queue instance supports the following methods:

Queue.empty()
Returns True if the queue is empty.

Queue.full()
Returns True if the queue is full.

Queue.qsize()
Return the number of items currently in the queue.

await Queue.get()
Returns an item from the queue.

await Queue.put(item)
Puts an item on the queue.


await Queue.join()
Wait for all of the elements put onto a queue to be processed. Consumers must call Queue.task_done() to indicate completion.

await Queue.task_done()
Indicate that processing has finished for an item. If all items have been processed and there are tasks waiting on Queue.join(), they will be awakened.

Here is an example of using queues in a producer-consumer problem:

import curio

async def producer(queue):
    for n in range(10):
        await queue.put(n)
    await queue.join()
    print('Producer done')

async def consumer(queue):
    while True:
        item = await queue.get()
        print('Consumer got', item)
        await queue.task_done()

async def main():
    q = curio.Queue()
    prod_task = await curio.spawn(producer(q))
    cons_task = await curio.spawn(consumer(q))
    await prod_task.join()
    await cons_task.cancel()

curio.run(main())

class PriorityQueue(maxsize=0)
Creates a priority queue with a maximum number of elements in maxsize.

In a PriorityQueue items are retrieved in priority order with the lowest priority first:

import curio

async def main():
    q = curio.PriorityQueue()
    await q.put((0, 'highest priority'))
    await q.put((100, 'very low priority'))
    await q.put((3, 'higher priority'))

    while not q.empty():
        print(await q.get())

curio.run(main())

This will output

(0, 'highest priority')
(3, 'higher priority')
(100, 'very low priority')

class LifoQueue(maxsize=0)
A queue with “Last In First Out” retrieval policy.


import curio

async def main():
    q = curio.LifoQueue()
    await q.put('first')
    await q.put('second')
    await q.put('last')

    while not q.empty():
        print(await q.get())

curio.run(main())

This will output

last
second
first

Here is an example of a producer-consumer problem with a UniversalQueue:

from curio import run, UniversalQueue, spawn, run_in_thread

import time
import threading

# An async task
async def consumer(q):
    print('Consumer starting')
    while True:
        item = await q.get()
        if item is None:
            break
        print('Got:', item)
        await q.task_done()
    print('Consumer done')

# A threaded producer
def producer(q):
    for i in range(10):
        q.put(i)
        time.sleep(1)
    q.join()
    print('Producer done')

async def main():
    q = UniversalQueue()
    t1 = await spawn(consumer(q))
    t2 = threading.Thread(target=producer, args=(q,))
    t2.start()
    await run_in_thread(t2.join)
    await q.put(None)
    await t1.join()
    await q.shutdown()

run(main())

In this code, the consumer() is a Curio task and producer() is a thread.


If the withfd=True option is given to a UniversalQueue, the queue additionally has a fileno() method that can be passed to various functions that poll for I/O events. When enabled, putting something in the queue also writes a byte to the associated file descriptor. This might be useful if you are trying to pass data from Curio to a foreign event loop.
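For instance, a foreign (non-Curio) event loop could watch the queue's file descriptor with the standard selectors module. This is only a rough sketch: the withfd=True option and fileno() method come from the description above, while the synchronous q.get() call is an assumption, mirroring the synchronous q.put() shown in the threaded producer example.

import selectors
from curio import UniversalQueue

q = UniversalQueue(withfd=True)

sel = selectors.DefaultSelector()
sel.register(q.fileno(), selectors.EVENT_READ)

def foreign_event_loop():
    # Wakes up whenever a Curio task puts something on the queue
    while True:
        for key, events in sel.select():
            item = q.get()     # assumed synchronous get(), see note above
            print('Got from Curio:', item)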

1.3.18 Synchronizing with Threads and Processes

Curio's synchronization primitives aren't safe to use with external threads or processes. However, Curio can work with existing thread- or process-level synchronization primitives if you use the abide() function.

await abide(op, *args, reserve_thread=False)
    Execute an operation in a manner that safely works with async code. If op is a coroutine function, then op(*args) is returned. If op is a synchronous function, then block_in_thread(op, *args) is returned. In both cases, you would use await to obtain the result. If op is an asynchronous context manager, it is returned unmodified. If op is a synchronous context manager, it is wrapped in a manner that carries out its execution in a backing thread. For this latter case, a special keyword argument reserve_thread=True may be given that instructs Curio to use the same backing thread for the entire duration of the context manager.

The main use of this function is in code that wants to safely synchronize Curio with threads and processes. For example, here is how you would synchronize a thread with a Curio task using a threading lock:

import curio
import threading
import time

# A curio task
async def child(lock):
    async with curio.abide(lock):
        print('Child has the lock')

# A thread
def parent(lock):
    with lock:
        print('Parent has the lock')
        time.sleep(5)

lock = threading.Lock()
threading.Thread(target=parent, args=(lock,)).start()
curio.run(child(lock))

If you wanted to trigger or wait for a thread-event, you might do this:

import curio
import threading

evt = threading.Event()

async def worker():
    await curio.abide(evt.wait)
    print('Working')
    ...

def main():
    ...
    evt.set()
    ...


For condition variables and reentrant locks, you should use the reserve_thread=True keyword argument to make sure the same backing thread is used throughout the block. For example:

import curio
import threading
import collections

# A thread
def producer(cond, items):
    for n in range(10):
        with cond:
            items.append(n)
            cond.notify()
    with cond:
        items.append(None)     # Sentinel to shut down the consumer
        cond.notify()
    print('Producer done')

# A curio task
async def consumer(cond, items):
    while True:
        async with curio.abide(cond, reserve_thread=True) as c:
            while not items:
                await c.wait()
            item = items.popleft()
            if item is None:
                break
            print('Consumer got:', item)
    print('Consumer done')

cond = threading.Condition()
items = collections.deque()

threading.Thread(target=producer, args=(cond, items)).start()
curio.run(consumer(cond, items))

A notable feature of abide() is that it also accepts Curio's own synchronization primitives. Thus, you can write code that works independently of the lock type. For example, the first locking example could be rewritten as follows and the child would still work:

import curio

# A curio task (works with any lock)
async def child(lock):
    async with curio.abide(lock):
        print('Child has the lock')

# Another curio task
async def main():
    lock = curio.Lock()
    async with lock:
        print('Parent has the lock')
        await curio.spawn(child(lock))
        await curio.sleep(5)

curio.run(main())

A special circle of hell awaits code that combines the use of the abide() function with task cancellation. Although cancellation is mostly supported, there are a few things to keep in mind. First, if you are using abide(func, arg1, arg2, ...) to run a synchronous function, that function will fully run to completion in a separate thread regardless of the cancellation. So, if there are any side effects associated with that code executing, you'll need to take them into account. Second, if you are using async with abide(lock) with a thread lock and a cancellation request is received while waiting for the lock.__enter__() method to execute, a background thread continues to run, waiting for the eventual lock acquisition. Once acquired, Curio releases it again. However, fully figuring out what's happening might be mind-bending.

The abide() function can be used to synchronize with a thread reentrant lock (e.g., threading.RLock). However, reentrancy is not supported. Each lock acquisition using abide() involves a different backing thread. Repeated acquisitions would violate the constraint that reentrant locks have of only being acquired by a single thread.

All things considered, it's probably best to try and avoid code that synchronizes Curio tasks with threads. However, if you must, Curio abides.

1.3.19 Asynchronous Threads

If you need to perform a lot of synchronous operations, but still interact with Curio, you might consider launching an asynchronous thread. An asynchronous thread flips the whole world around: instead of executing synchronous operations using run_in_thread(), you kick everything out to a thread and selectively perform the asynchronous operations using a magic AWAIT() function.

class AsyncThread(target, args=(), kwargs={}, daemon=True)
    Creates an asynchronous thread. The arguments are the same as for the threading.Thread class. target is a synchronous callable. args and kwargs are its arguments. daemon specifies if the thread runs in daemonic mode.

await AsyncThread.start()
    Starts the asynchronous thread.

await AsyncThread.join()
    Waits for the thread to terminate, returning the callable's final result. The final result is returned in the same manner as the usual Task.join() method used on Curio tasks.

await AsyncThread.wait()
    Waits for the thread to terminate, but does not return any result.

AsyncThread.result
    The result of the thread, if completed. If accessed before the thread terminates, a RuntimeError exception is raised. If the thread crashed with an exception, that exception is reraised on access.

await AsyncThread.cancel()
    Cancels the asynchronous thread. The behavior is the same as cancellation performed on Curio tasks. Note: an asynchronous thread can only be cancelled when it performs blocking operations on asynchronous objects (e.g., using AWAIT()).
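Using only the methods described above, an AsyncThread might be created and joined like this. This is a minimal sketch; the curio.thread import location is assumed from the other examples in this section:

import time
from curio import run
from curio.thread import AsyncThread

def blocking_work(seconds):
    time.sleep(seconds)          # An ordinary blocking call
    return 'slept %s seconds' % seconds

async def main():
    t = AsyncThread(target=blocking_work, args=(1,))
    await t.start()
    result = await t.join()      # The callable's final result
    print(result)

run(main)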

As a shortcut for creating an asynchronous thread, you can use spawn_thread() instead.

await spawn_thread(func=None, *args, daemon=False)
    Launch an asynchronous thread that runs the callable func(*args). daemon specifies if the thread runs in daemonic mode. This function may also be used as a context manager if func is None. In that case, the body of the context manager executes in a separate thread. For the context-manager case, the body is not allowed to perform any asynchronous operation involving async or await. However, the AWAIT() function may be used to delegate asynchronous operations back to Curio's main thread. A sketch of the context-manager form follows.
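Here is a sketch of the context-manager form. The async with usage shown here is an assumption based on the description above; inside the block, ordinary blocking calls are fine, and AWAIT() hands asynchronous operations back to Curio:

import time
from curio import run, sleep
from curio.thread import spawn_thread, AWAIT

async def main():
    print('In Curio')
    async with spawn_thread():
        # This block runs in a separate thread; blocking calls are fine here
        time.sleep(1)
        # Asynchronous operations are delegated back to Curio with AWAIT()
        AWAIT(sleep(1))
    print('Back in Curio')

run(main)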

Within a thread, the following function can be used to execute a coroutine.

AWAIT(coro)
    Execute a coroutine on behalf of an asynchronous thread. The requested coroutine always executes in Curio's main execution thread. The caller is blocked until it completes. If used outside of an asynchronous thread, an AsyncOnlyError exception is raised. If coro is not a coroutine, it is returned unmodified. The reason AWAIT is all-caps is to make it more easily heard when there are all of these coders yelling at you to just use pure async code instead of launching a thread. Also, await is likely to be a reserved keyword in Python 3.7.

Here is a simple example of an asynchronous thread that reads data off a Curio queue:

from curio import run, Queue, sleep, CancelledError
from curio.thread import spawn_thread, AWAIT

def consumer(queue):
    try:
        while True:
            item = AWAIT(queue.get())
            print('Got:', item)
            AWAIT(queue.task_done())
    except CancelledError:
        print('Consumer goodbye!')
        raise

async def main():
    q = Queue()
    t = await spawn_thread(consumer, q)

    for i in range(10):
        await q.put(i)
        await sleep(1)

    await q.join()
    await t.cancel()

run(main())

Asynchronous threads can also be created using the following decorator.

async_thread(callable)
    A decorator that adapts a synchronous callable into an asynchronous function that runs in an asynchronous thread.

Using this decorator, you can write a function like this:

from curio import CancelledError
from curio.thread import async_thread, AWAIT

@async_thread
def consumer(queue):
    try:
        while True:
            item = AWAIT(queue.get())
            if item is None:
                break
            print('Got:', item)
            AWAIT(queue.task_done())
    except CancelledError:
        print('Consumer goodbye!')
        raise

Now, whenever the code executes (e.g., await consumer(q)), a thread will automatically be created. One amazing thing about such functions is that they can still be used in traditional synchronous code. For example, you could use the above consumer function with normal threaded code:

import threading
import queue

def producer(queue):
    for i in range(10):
        queue.put(i)
    queue.put(None)

def main():
    q = queue.Queue()
    t1 = threading.Thread(target=consumer, args=(q,))
    t1.start()
    producer(q)
    t1.join()

main()

Asynchronous threads can use all of Curio's features including coroutines, asynchronous context managers, asynchronous iterators, timeouts, and more. For coroutines, use the AWAIT() function. For context managers and iterators, use the synchronous counterpart. For example, you could write this:

from curio.thread import async_thread, AWAIT
from curio import run, tcp_server

@async_thread
def echo_client(client, addr):
    print('Connection from:', addr)
    with client:
        f = client.as_stream()
        for line in f:
            AWAIT(client.sendall(line))
    print('Client goodbye')

run(tcp_server('', 25000, echo_client))

In this code, the with client and for line in f statements are actually executing asynchronous code behind the scenes.

Asynchronous threads can perform any combination of blocking operations, including those that might involve normal thread-related primitives such as locks and queues. These operations will block the thread itself, but will not block the Curio kernel loop. In a sense, this is the whole point: if you run things in an async thread, the rest of Curio is protected. Asynchronous threads can be cancelled in the same manner as normal Curio tasks. However, the same rules apply: an asynchronous thread can only be cancelled on blocking operations involving AWAIT().

A final curious thing about async threads is that the AWAIT() function is a no-op if you don't give it a coroutine. This means that code, in many cases, can be made compatible with regular Python threads. For example, this code involving normal threads actually runs:

from curio.thread import AWAIT
from curio import CancelledError
from threading import Thread
from queue import Queue
from time import sleep

def consumer(queue):
    try:
        while True:
            item = AWAIT(queue.get())
            print('Got:', item)
            AWAIT(queue.task_done())
    except CancelledError:
        print('Consumer goodbye!')
        raise

def main():
    q = Queue()
    t = Thread(target=consumer, args=(q,), daemon=True)
    t.start()

    for i in range(10):
        q.put(i)
        sleep(1)

    q.join()

main()

In this code, consumer() is simply launched in a regular thread with a regular thread queue. The AWAIT() operations do nothing: the queue operations aren't coroutines, so their results are returned unmodified. Certain Curio features such as cancellation aren't supported by normal threads, so those would be ignored. However, it's interesting that you can write a kind of hybrid code that works in both a threaded and an asynchronous world.

1.3.20 Signals

One way to manage Unix signals is to use the SignalQueue class. This class operates as a queue, but you use it with an asynchronous context manager to enable the delivery of signals. The usage looks like this:

import signal

async def coro():
    ...
    async with SignalQueue(signal.SIGUSR1, signal.SIGHUP) as sig_q:
        ...
        signo = await sig_q.get()
        print('Got signal', signo)
        ...

For all of the statements inside the context manager, signals will be queued in the background. The sig_q.get() operation will return received signals one at a time from the queue. Even though this queue contains signals as they were received by Python, be aware that "reliable signaling" is not guaranteed. Python only runs signal handlers periodically in the background and multiple signals might be collapsed into a single signal delivery.

Another way to receive signals is to use the SignalEvent class. This is particularly useful for one-time signals such as a keyboard interrupt or SIGTERM. Here's an example of how you might use a signal event to shut down a task:

Goodbye = SignalEvent(signal.SIGINT, signal.SIGTERM)

async def child():
    while True:
        print('Spinning')
        await sleep(1)

async def coro():
    task = await spawn(child)
    await Goodbye.wait()
    print('Got signal. Goodbye')
    await task.cancel()

class SignalQueue(*signals)
    Create a queue for receiving signals. signals is one or more signals as defined in the built-in signal module. A SignalQueue is a proper queue. Use the get() method to receive a signal. Other queue methods can be used as well. For example, you can call put() to manually put a signal number on the queue if you want (possibly useful in testing). The queue must be used as an asynchronous context manager for signal delivery to be enabled.
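As a quick illustration of the testing use of put() mentioned above, the following sketch manually queues a signal number instead of waiting for a real signal to arrive. The top-level import location of SignalQueue is an assumption; adjust it to wherever SignalQueue lives in your version of Curio.

import signal
from curio import run, SignalQueue    # import location assumed

async def main():
    async with SignalQueue(signal.SIGUSR1) as sig_q:
        await sig_q.put(signal.SIGUSR1)     # manually inject a "signal"
        print('Got signal', await sig_q.get())

run(main)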

class SignalEvent(*signals)
    Create an event that allows signal waiting. Use the wait() method to wait for arrival. This is a proper Event object. You can use other methods such as set() or is_set().

The following functions are also defined for signal management:

enable_signals(signals)
    Return a context manager in which Curio enables the delivery of the given signals. signals is a set of signal numbers from the signal module. This function may only be called from Python's main execution thread. It is mainly needed when the Curio kernel runs outside the main thread (see the example below).

ignore_signals(signals)
    Return a context manager in which signals are ignored. signals is a set of signal numbers from the signal module. This function may only be called from Python's main execution thread. Note that signals are not delivered asynchronously to Curio via callbacks (they only arrive via queues or events). Because of this, it's rarely necessary to mask signals. You may be better off blocking cancellation with the disable_cancellation() function instead.

These last two functions are mainly intended for use in setting up the runtime environment for Curio. For example, if you needed to run Curio in a separate thread and your code involved signal handling, you'd need to do this:

import threading
import curio
import signal

allowed_signals = { signal.SIGINT, signal.SIGTERM, signal.SIGUSR1 }

async def main():
    ...

if __name__ == '__main__':
    with curio.enable_signals(allowed_signals):
        t = threading.Thread(target=curio.run, args=(main,))
        t.start()
        ...
        t.join()

Again, keep in mind that you don't need to do this if Curio is running in the main thread. Running in a separate thread is more of a special case.

1.3.21 Scheduler Activations

Each task in Curio goes through a life-cycle of creation, running, suspension, and eventual termination. These events can be monitored by external tools by defining classes that inherit from Activation.

class curio.activation.Activation
    Base class for defining scheduler activations.

The following methods are executed as callback functions by the kernel:

curio.activation.activate(kernel)
    Executed once upon initialization of the Curio kernel. kernel is a reference to the Kernel instance.


curio.activation.created(task)
    Called when a new task is created. task is the newly created Task instance.

curio.activation.running(task)
    Called immediately prior to the execution of a task.

curio.activation.suspended(task)
    Called when a task has suspended execution.

curio.activation.terminated(task)
    Called when a task has terminated execution. Note: the suspended() method is always called prior to a task being terminated.

As an example, here is a scheduler activation that monitors for long execution times and reports warnings:

from curio.activation import Activation
import time

class LongBlock(Activation):
    def __init__(self, maxtime):
        self.maxtime = maxtime

    def running(self, task):
        self.start = time.time()

    def suspended(self, task):
        end = time.time()
        if end - self.start > self.maxtime:
            print(f'Long blocking in {task.name}: {end - self.start}')

Scheduler activations are registered when a Kernel is created or with the top-level run() function:

kern = Kernel(activations=[LongBlock(0.05)])
with kern:
    kern.run(coro)

# Alternative
run(coro, activations=[LongBlock(0.05)])

1.3.22 Asynchronous Metaprogramming

The curio.meta module provides some decorators and metaclasses that might be useful when writing larger programs involving coroutines.

class curio.meta.AsyncABC
    A base class that provides the functionality of a normal abstract base class, but additionally enforces coroutine-correctness on methods in subclasses. That is, if a method is defined as a coroutine in a parent class, then it must also be a coroutine in child classes.

Here is an example:

from curio.meta import AsyncABC
from abc import abstractmethod

class Base(AsyncABC):
    @abstractmethod
    async def spam(self):
        pass

    @abstractmethod
    async def grok(self):
        pass

class Child(Base):
    async def spam(self):
        pass

c = Child()     # Error -> grok() not defined

class Child2(Base):
    def spam(self):        # Error -> Not defined using async def
        pass

    async def grok(self):
        pass

The enforcement of coroutines is applied to all methods. Thus, the following classes would also generate an error:

class Base(AsyncABC):
    async def spam(self):
        pass

    async def grok(self):
        pass

class Child(Base):
    def spam(self):        # Error -> Not defined using async def
        pass

class curio.meta.AsyncObject
    A base class that provides all of the functionality of AsyncABC, but additionally requires instances to be created inside of coroutines. The __init__() method must be defined as a coroutine and may call other coroutines.

Here is an example using AsyncObject:

from curio.meta import AsyncObject

class Spam(AsyncObject):
    async def __init__(self, x, y):
        self.x = x
        self.y = y

# To create an instance
async def func():
    s = await Spam(2, 3)
    ...

curio.meta.blocking(func)
    A decorator that indicates that the function performs a blocking operation. If the function is called from within a coroutine, the function is executed in a separate thread and await is used to obtain the result. If the function is called from normal synchronous code, then the function executes normally. The Curio run_in_thread() coroutine is used to execute the function in a thread.

curio.meta.cpubound(func)
    A decorator that indicates that the function performs CPU-intensive work. If the function is called from within a coroutine, the function is executed in a separate process and await is used to obtain the result. If the function is called from normal synchronous code, then the function executes normally. The Curio run_in_process() coroutine is used to execute the function in a process.

The @blocking and @cpubound decorators are interesting in that they make normal Python functions usable from both asynchronous and synchronous code. Consider this example:

import curio
from curio.meta import blocking
import time

@blocking
def slow(name):
    time.sleep(30)
    return 'Hello ' + name

async def main():
    result = await slow('Dave')      # Async execution
    print(result)

if __name__ == '__main__':
    result = slow('Guido')           # Sync execution
    print(result)
    curio.run(main())

In this example, the slow() function can be used from both coroutines and normal synchronous code. However, when called in a coroutine, await must be used. Behind the scenes, the function runs in a thread, preventing it from blocking the execution of other coroutines.
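The @cpubound decorator works the same way, except that the work is farmed out to a process. A minimal sketch along the same lines (the prime-counting function is just an illustration):

import curio
from curio.meta import cpubound

@cpubound
def count_primes(limit):
    # Deliberately CPU-intensive work
    return sum(1 for n in range(2, limit)
               if all(n % d for d in range(2, int(n ** 0.5) + 1)))

async def main():
    result = await count_primes(50000)    # Runs in a separate process
    print(result)

if __name__ == '__main__':
    print(count_primes(1000))             # Sync execution
    curio.run(main())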

curio.meta.awaitable(syncfunc)
    A decorator that allows an asynchronous implementation of a function to be attached to an existing synchronous function. If the resulting function is called from synchronous code, the synchronous function is used. If the function is called from asynchronous code, the asynchronous function is used.

Here is an example that illustrates this:

import curio
from curio.meta import awaitable

def spam(x, y):
    print('Synchronous ->', x, y)

@awaitable(spam)
async def spam(x, y):
    print('Asynchronous ->', x, y)

async def main():
    await spam(2, 3)     # Calls asynchronous spam()

if __name__ == '__main__':
    spam(2, 3)           # Calls synchronous spam()
    curio.run(main())

1.3.23 Exceptions

The following exceptions are defined. All are subclasses of the CurioError base class.

exception curio.CurioError
    Base class for all Curio-specific exceptions.


exception curio.CancelledError
    Base class for all cancellation-related exceptions.

exception curio.TaskCancelled
    Exception raised in a coroutine if it has been cancelled using the Task.cancel() method. If ignored, the coroutine is silently terminated. If caught, a coroutine can continue to run, but should work to terminate execution. Ignoring a cancellation request and continuing to execute will likely cause some other task to hang.

exception curio.TaskTimeout
    Exception raised in a coroutine if it has been cancelled by a timeout.

exception curio.TimeoutCancellationError
    Exception raised in a coroutine if it has been cancelled due to a timeout, but not one related to the innermost timeout operation.

exception curio.TaskError
    Exception raised by the Task.join() method if an uncaught exception occurs in a task. It is a chained exception. The __cause__ attribute contains the exception that caused the task to fail.

1.3.24 Low-level Kernel System Calls

The following system calls are available, but not typically used directly in user code. They are used to implement higher-level objects such as locks, socket wrappers, and so forth. If you find yourself using these, you're probably doing something wrong, or implementing a new Curio primitive. These calls are found in the curio.traps submodule.

Traps come in two flavors: blocking and synchronous. A blocking trap might block for an indefinite period of time while allowing other tasks to run, and it always checks for and raises any pending timeouts or cancellations. A synchronous trap is implemented by trapping into the kernel, but semantically it acts like a regular synchronous function call. Specifically, this means that it always returns immediately without running any other task, and that it does not act as a cancellation point.

await curio.traps._read_wait(fileobj)
    Blocking trap. Sleep until data is available for reading on fileobj. fileobj is any file-like object with a fileno() method.

await curio.traps._write_wait(fileobj)
    Blocking trap. Sleep until data can be written on fileobj. fileobj is any file-like object with a fileno() method.

await curio.traps._future_wait(future)
    Blocking trap. Sleep until a result is set on future. future is an instance of concurrent.futures.Future.

await curio.traps._cancel_task(task)
    Synchronous trap. Cancel the indicated task.

await curio.traps._scheduler_wait(sched, state_name)
    Blocking trap. Go to sleep on a kernel scheduler primitive. sched is an instance of curio.sched.SchedBase. state_name is the name of the wait state (used in debugging).

await curio.traps._scheduler_wake(sched, n=1, value=None, exc=None)
    Synchronous trap. Reschedule one or more tasks from a kernel scheduler primitive. n is the number of tasks to release. value and exc specify the return value or exception to raise in the task when it resumes execution.

await curio.traps._get_kernel()
    Synchronous trap. Get a reference to the running Kernel object.

await curio.traps._get_current()
    Synchronous trap. Get a reference to the currently running Task instance.

await curio.traps._set_timeout(seconds)
    Synchronous trap. Set a timeout in the currently running task. Returns the previous timeout (if any).


await curio.traps._unset_timeout(previous)
    Synchronous trap. Unset a timeout in the currently running task. previous is the value returned by the _set_timeout() call used to set the timeout.

await curio.traps._clock()
    Synchronous trap. Returns the current time according to the Curio kernel's clock.

Again, you're unlikely to use any of these functions directly. However, here's a small taste of how they're used. For example, the curio.io.Socket.recv() method looks roughly like this:

class Socket(object):
    ...
    async def recv(self, maxbytes):
        while True:
            try:
                return self._socket.recv(maxbytes)
            except BlockingIOError:
                await _read_wait(self._socket)
    ...

This method first tries to receive data. If none is available, the _read_wait() call is used to put the task to sleep until reading can be performed. When it awakes, the receive operation is retried. Just to emphasize, the _read_wait() doesn't actually perform any I/O. It's just scheduling a task for it.

1.3.25 Debugging and Diagnostics

Curio provides a few facilities for basic debugging and diagnostics. If you print a Task instance, it will tell you the name of the associated coroutine along with the current file and line number where the task is executing. The output might look similar to this:

Task(id=3, name='child', state='TIME_SLEEP') at filename.py:9

You can additionally use the Task.traceback() method to create a current stack traceback of any given task. For example:

t = await spawn(coro)
...
print(t.traceback())

Instead of a full traceback, you can also get the current filename and line number:

filename, lineno = await t.where()

To find out more detailed information about what the kernel is doing, you can supply one or more debugging modules to the run() function. To trace all task scheduling events, use the schedtrace debugger as follows:

from curio.debug import schedtrace
run(coro, debug=schedtrace)

To trace all low-level kernel traps, use the traptrace debugger:

from curio.debug import traptrace
run(coro, debug=traptrace)

To report all exceptions from crashed tasks, use the logcrash debugger:


from curio.debug import logcrash
run(coro, debug=logcrash)

To report warnings about long-running tasks that appear to be stalling the event loop, use the longblock debugger:

from curio.debug import longblock
run(coro, debug=longblock(max_time=0.1))

The different debuggers may be combined together if you provide a list. For example:

run(coro, debug=[schedtrace, traptrace, logcrash])

The amount of output produced by the different debugging modules might be considerable. You can filter it to a specific set of coroutine names using the filter keyword argument. For example:

async def spam():
    ...

async def coro():
    t = await spawn(spam)
    ...

run(coro, debug=schedtrace(filter={'spam'}))

The logging level used by the different debuggers can be changed using the level keyword argument:

run(coro, debug=schedtrace(level=logging.DEBUG))

A different Logger instance can be supplied using the log keyword argument:

import logging
run(coro, debug=schedtrace(log=logging.getLogger('spam')))

Be aware that all diagnostic logging is synchronous. As such, logging operations might temporarily block the event loop, especially if logging output involves file I/O or network operations. If this is a concern, you should take steps to mitigate it in the configuration of logging. For example, you might use the QueueHandler and QueueListener objects from the logging module to offload log handling to a separate thread.
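For example, a standard-library-only setup along those lines (not specific to Curio) might look like this:

import logging
import logging.handlers
import queue

log_q = queue.Queue()

# Tasks log through a QueueHandler, which never performs slow I/O itself
root = logging.getLogger()
root.addHandler(logging.handlers.QueueHandler(log_q))
root.setLevel(logging.INFO)

# A QueueListener thread drains the queue and does the actual (blocking) output
listener = logging.handlers.QueueListener(log_q, logging.StreamHandler())
listener.start()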

1.4 Developing with Curio

(This is a work in progress)

So, you want to write a larger application or library that depends on Curio? This document describes the overall philosophy behind Curio, how it works internally, and how you might approach software development using it.

1.4.1 Please, Don’t Use Curio!

Let's be frank for a moment: you really don't want to use Curio. All things equal, you should probably be programming with threads. Yes, threads. THOSE threads. Seriously. I'm not kidding.

“But what about the GIL?” you ask. Yes, yes, that can sometimes be an issue.

"Or what about the fact that no one is smart enough to program with threads?" Okay, yes, a lot of computer science students have exploded their heads trying to solve something like the "Sleeping Barber" problem on their Operating Systems final exam. Yes, it can get tricky sometimes.


"And what about making everything web-scale?" Yes, threads might not let you run the next Facebook on a single server instance. Point taken.

All of these are perfectly valid concerns. However, the truth of the matter is that threads still actually work pretty well for a lot of problems (most problems, really). For one, it is extremely unlikely that you're building the next Facebook. If all you need to do is serve a few hundred clients at once, threads will work fine for that. Second, there are well-known ways to make thread programming sane. For example, using functions, avoiding shared state and side effects, and coordinating threads with queues. As for the dreaded GIL, that is mainly a concern for CPU-intensive processing. Although it's an annoyance, there are known ways to work around it using process pools, message passing, or C extensions. Finally, threads have the benefit of working with almost any existing Python code. All of the popular packages (e.g., requests, SQLAlchemy, Django, Flask, etc.) work fine with threads. I use threads in production. There, I've said it.

Now, suppose that you've ignored this advice or that you really do need to write an application that can handle 10,000 concurrent client connections. In that case, a coroutine-based library like Curio might be able to help you. Before beginning though, be aware that coroutines are part of a strange new world. They execute differently than normal Python code and don't play well with existing libraries. Nor do they solve the problem of the GIL or give you increased parallelism. In addition to seeing new kinds of bugs, coroutines will likely make you swat your arms in the air as you fight swarms of stinging bats and swooping manta rays. Your coworkers will keep their distance more than usual. Coroutines are weird, finicky, fun, and amazing (sometimes all at once). Only you can decide if this is what you really want.

Curio makes it all just a bit more interesting by killing off every beloved character of asynchronous programming in the first act. The event loop? Dead. Futures? Dead. Protocols? Dead. Transports? You guessed it, dead. And the scrappy hero, Callback "Buck" Function? Yep, dead. Big time dead, as in not just "pining for the fjords" dead. Tried to apply a monkeypatch. It failed. Now, when Curio goes to the playlot and asks "who wants to interoperate?", the other kids are quickly shuttled away by their fretful parents.

And a hollow voice says “plugh.”

Say, have you considered using threads? Or almost anything else?

1.4.2 Coroutines

First things first. Curio is solely focused on solving one specific problem: the concurrent execution and scheduling of coroutines. This section covers some coroutine basics and takes you into the heart of why they're used for concurrency.

Defining a Coroutine

A coroutine is a function defined using async def such as this:

>>> async def greeting(name):
...     return 'Hello ' + name

Unlike a normal function, a coroutine never executes independently. It has to be driven by some other code. It's low-level, but you can drive a coroutine manually if you want:

>>> g = greeting('Dave')
>>> g
<coroutine object greeting at 0x10978ee60>
>>> g.send(None)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
StopIteration: Hello Dave
>>>


Normally, you wouldn't do this though. Curio provides a high-level function that runs a coroutine and returns its final result:

>>> from curio import run
>>> run(greeting, 'Dave')
'Hello Dave'
>>>

By the way, run() is basically the only function Curio provides to the outside world of non-coroutines. Remember that. It's "run". Three letters.

Coroutines Calling Coroutines

Coroutines can call other coroutines as long as you preface the call with the await keyword. For example:

>>> async def main():
...     names = ['Dave', 'Paula', 'Thomas', 'Lewis']
...     for name in names:
...         print(await greeting(name))
...
>>> from curio import run
>>> run(main)
Hello Dave
Hello Paula
Hello Thomas
Hello Lewis

For the most part, you can write async functions, methods, and do everything that you would do with normal Python functions. The use of await in calls is important though; if you don't do that, the called coroutine won't run and you'll be fighting the aforementioned swarm of stinging bats trying to figure out what's wrong.

The Coroutine Menagerie

For the most part, coroutines are centered on async function definitions. However, there are a few additional language features that are "async aware." For example, you can define an asynchronous context manager:

from curio import run

class AsyncManager(object):
    async def __aenter__(self):
        print('Entering')

    async def __aexit__(self, ty, val, tb):
        print('Exiting')

async def main():
    m = AsyncManager()
    async with m:
        print('Hey there!')

>>> run(main)
Entering
Hey there!
Exiting
>>>

You can also define an asynchronous iterator:


from curio import run

class AsyncCountdown(object):
    def __init__(self, start):
        self.start = start

    def __aiter__(self):
        return AsyncCountdownIter(self.start)

class AsyncCountdownIter(object):
    def __init__(self, n):
        self.n = n

    async def __anext__(self):
        self.n -= 1
        if self.n < 0:
            raise StopAsyncIteration
        return self.n + 1

async def main():
    async for n in AsyncCountdown(5):
        print('T-minus', n)

>>> run(main)
T-minus 5
T-minus 4
T-minus 3
T-minus 2
T-minus 1
>>>

Last, but not least, you can define an asynchronous generator as an alternative implementation of an asynchronous iterator:

from curio import run

async def countdown(n):
    while n > 0:
        yield n
        n -= 1

async def main():
    async for n in countdown(5):
        print('T-minus', n)

run(main)

An asynchronous generator feeds values to an async-for loop. In all of these cases, the essential feature enhancement is that you can call other async functions in the implementation. That is, since certain methods such as __aenter__(), __aiter__(), and __anext__() are all async, they can use the await statement to call other coroutines as normal functions.

Try not to worry too much about the low-level details of all of this. Stay focused on the high level: the world of "async" programming is mainly going to involve combinations of async functions, async context managers, and async iterators. They are all meant to work together. These are also core features of the Python language itself; they are not part of a specific library module or runtime environment.


Blocking Calls (i.e., “System Calls”)

When a program runs, it executes statements one after the other until the services of the operating system are needed (e.g., sleeping, reading a file, receiving a network packet, etc.). For example, consider this function:

import time

def sleepy(seconds):
    print('Yawn. Getting sleepy.')
    time.sleep(seconds)
    print('Awake at last!')

If you call this function, you'll see a message and the program will go to sleep for a while. While it's sleeping, nothing is happening at all. If you look at the CPU usage, it will show 0%. Under the hood, the program has made a "system call" to the operating system, which has suspended the program. At some point the timer will expire and the operating system will reschedule the program to run again. Just to emphasize, the time.sleep() call suspends the Python interpreter entirely. At some point, Python will resume, but that's outside of its control.

The mechanism for making a system call is different than that of a normal function in that it involves executing a special machine instruction known as a "trap." A trap is basically a software-generated interrupt. When it occurs, the running process is suspended and control is passed to the operating system kernel so that it can handle the request. There are all sorts of other magical things that happen on trap handling, but you're really not supposed to worry about it as a programmer.

Now, what does all of this have to do with coroutines? Let’s define a very special kind of coroutine:

from types import coroutine

@coroutine
def sleep(seconds):
    yield ('sleep', seconds)

This coroutine is different than the rest: it doesn't use the async syntax and it makes direct use of the yield statement. The @coroutine decorator is there so that it can be called with await. Now, let's write a coroutine that uses this:

async def sleepy(seconds):
    print('Yawn. Getting sleepy.')
    await sleep(seconds)
    print('Awake at last!')

Let’s manually drive it using the same technique as before:

>>> c = sleepy(10)
>>> request = c.send(None)
Yawn. Getting sleepy.
>>> request
('sleep', 10)

The output from the first print() function appears, but the coroutine is now suspended. The return value of the send() call is the tuple produced by the yield statement in the sleep() coroutine. Now, step back and think about what has happened here. Focus carefully. Focus on a special place. Focus on the breath. Breathe in... Breathe out... Focus.

Basically the code has executed a trap! The yield statement caused the coroutine to suspend. The returned tuple is a request (in this case, a request to sleep for 10 seconds). It is now up to the driver of the code to satisfy that request. But who's driving this show? Wait, that's YOU! So, start counting... "T-minus 10, T-minus 9, T-minus 8, ... T-minus 1." Time's up! Put the coroutine back to work:


>>> c.send(None)
Awake at last!
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
StopIteration
>>>

Congratulations! You just passed your first test on the way to getting a job as an operating system.

Here’s some minimal code that executes what you just did:

import time

def run(coro):
    while True:
        try:
            request, *args = coro.send(None)
            if request == 'sleep':
                time.sleep(*args)
            else:
                print('Unknown request:', request)
        except StopIteration as e:
            return e.value

All of this might seem very low-level, but this is precisely what Curio is doing. Coroutines execute statements under the supervision of a small kernel. When a coroutine executes a system call (e.g., a special coroutine that makes use of yield), the kernel receives that request and acts upon it. The coroutine resumes once the request has completed.

Keep in mind that all of this machinery is hidden from view. The coroutine doesn't actually know anything about the run() function or use code that directly involves the yield statement. Those are low-level implementation details, like machine code. The coroutine simply makes a high-level call such as await sleep(10) and it will just work. Somehow.

Coroutines and Multitasking

Let's continue to focus on the fact that a defining feature of coroutines is that they can suspend their execution. When a coroutine suspends, there's no reason why the run() function needs to wait around doing nothing. In fact, it could switch to a different coroutine and run it instead. This is a form of multitasking. Let's write a slightly different variant of the run() function:

from collections import deque
from types import coroutine

@coroutine
def switch():
    yield ('switch',)

tasks = deque()

def run():
    while tasks:
        coro = tasks.popleft()
        try:
            request, *args = coro.send(None)
            if request == 'switch':
                tasks.append(coro)
            else:
                print('Unknown request:', request)
        except StopIteration as e:
            print('Task done:', coro)

In this code, the run() function implements a simple round-robin scheduler and a single request for switching tasks as provided by the switch() coroutine. Here are some sample coroutine functions to run:

async def countdown(n):
    while n > 0:
        print('T-minus', n)
        await switch()
        n -= 1

async def countup(stop):
    n = 1
    while n <= stop:
        print('Up we go', n)
        await switch()
        n += 1

tasks.append(countdown(10))
tasks.append(countup(15))
run()

When you run this code, you’ll see the countdown() and countup() coroutines rapidly alternating like this:

T-minus 10
Up we go 1
T-minus 9
Up we go 2
T-minus 8
Up we go 3
...
T-minus 1
Up we go 10
Task done: <coroutine object countdown at 0x102a3ee08>
Up we go 11
Up we go 12
Up we go 13
Up we go 14
Up we go 15
Task done: <coroutine object countup at 0x102a3ef10>

Excellent. We're running more than one coroutine concurrently. The only catch is that the switch() function isn't so interesting. To make this more useful, you'd need to expand the run() loop to understand more operations such as requests to sleep and for I/O. Let's add sleeping:

import time
from collections import deque
from types import coroutine
from bisect import insort

@coroutine
def switch():
    yield ('switch',)

@coroutine
def sleep(seconds):
    yield ('sleep', seconds)

tasks = deque()
sleeping = [ ]

def run():
    while tasks:
        coro = tasks.popleft()
        try:
            request, *args = coro.send(None)
            if request == 'switch':
                tasks.append(coro)
            elif request == 'sleep':
                seconds = args[0]
                deadline = time.time() + seconds
                insort(sleeping, (deadline, coro))
            else:
                print('Unknown request:', request)
        except StopIteration as e:
            print('Task done:', coro)

        while not tasks and sleeping:
            now = time.time()
            duration = sleeping[0][0] - now
            if duration > 0:
                time.sleep(duration)
            _, coro = sleeping.pop(0)
            tasks.append(coro)

Things are starting to get a bit more serious now. For sleeping, the coroutine is set aside in a holding list that's sorted by sleep expiration time (aside: the bisect.insort() function is a useful way to construct a sorted list; see the small example below). The bottom part of the run() function now sleeps if there's nothing else to do. On the conclusion of sleeping, the task is put back on the task queue.
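As a tiny aside, here is what insort() does with (deadline, name) pairs; string names are used here instead of coroutine objects just so the tuples stay comparable:

from bisect import insort

sleeping = []
insort(sleeping, (5.0, 'coro-a'))
insort(sleeping, (1.0, 'coro-b'))
insort(sleeping, (3.0, 'coro-c'))
print(sleeping)     # [(1.0, 'coro-b'), (3.0, 'coro-c'), (5.0, 'coro-a')]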

Here are some modified tasks that sleep:

async def countdown(n):
    while n > 0:
        print('T-minus', n)
        await sleep(2)
        n -= 1

async def countup(stop):
    n = 1
    while n <= stop:
        print('Up we go', n)
        await sleep(1)
        n += 1

tasks.append(countdown(10))
tasks.append(countup(15))
run()

If you run this program, you should see output like this:

T-minus 10
Up we go 1
Up we go 2
T-minus 9
Up we go 3
Up we go 4
T-minus 8
Up we go 5
Up we go 6
...

You're now well on your way to writing your own little operating system, and Curio. This is essentially the whole idea. Curio is basically a small coroutine scheduler. In addition to sleeping, it allows coroutines to switch on other kinds of blocking operations involving I/O, waiting on synchronization primitives, Unix signals, and so forth. Your operating system does exactly the same thing when processes execute actual system calls. The ability to switch between coroutines is why they are useful for concurrent programming. This is really the big idea in a nutshell.

Coroutines versus Threads

Code written using coroutines looks very similar to code written using threads. This is by design. For example, you could take the code in the previous section and write it to use threads like this:

import time
import threading

def countdown(n):
    while n > 0:
        print('T-minus', n)
        time.sleep(2)
        n -= 1

def countup(stop):
    n = 1
    while n <= stop:
        print('Up we go', n)
        time.sleep(1)
        n += 1

threading.Thread(target=countdown, args=(10,)).start()
threading.Thread(target=countup, args=(15,)).start()

Not only does it look almost identical, it runs in essentially the same way. Of course, nobody really cares about code that counts up and down. What they really want to do is write network servers. So, here's a more realistic thread-programming example involving sockets:

# echoserv.py

from socket import *
from threading import Thread

def echo_server(address):
    sock = socket(AF_INET, SOCK_STREAM)
    sock.setsockopt(SOL_SOCKET, SO_REUSEADDR, 1)
    sock.bind(address)
    sock.listen(5)
    print('Server listening at', address)
    with sock:
        while True:
            client, addr = sock.accept()
            Thread(target=echo_client, args=(client, addr)).start()

def echo_client(client, addr):
    print('Connection from', addr)
    with client:
        while True:
            data = client.recv(100000)
            if not data:
                break
            client.sendall(data)
    print('Connection closed')

if __name__ == '__main__':
    echo_server(('', 25000))

Now, here is that same code written with coroutines and Curio:

# echoserv.py

from curio import run, spawn
from curio.socket import *

async def echo_server(address):
    sock = socket(AF_INET, SOCK_STREAM)
    sock.setsockopt(SOL_SOCKET, SO_REUSEADDR, 1)
    sock.bind(address)
    sock.listen(5)
    print('Server listening at', address)
    async with sock:
        while True:
            client, addr = await sock.accept()
            await spawn(echo_client, client, addr)

async def echo_client(client, addr):
    print('Connection from', addr)
    async with client:
        while True:
            data = await client.recv(100000)
            if not data:
                break
            await client.sendall(data)
    print('Connection closed')

if __name__ == '__main__':
    run(echo_server, ('', 25000))

Both versions of code involve the same statements and have the same overall control flow. The key difference is that the code involving coroutines is executed entirely in a single thread by the run() function, which is scheduling and switching the coroutines on its own without any assistance from the operating system. The code using threads spawns actual system threads (e.g., POSIX threads) that are scheduled by the operating system.

The coroutine approach has certain advantages and disadvantages. One potential advantage of the coroutine approach is that task switching can only occur on statements involving the await keyword. Thus, it might be easier to reason about the behavior (in contrast, threads are fully preemptive and might switch on any statement). Coroutines are also far more resource efficient: you can create hundreds of thousands of coroutines without much concern. A hundred thousand threads? Good luck.
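To get a rough feel for that resource efficiency, a quick (and unscientific) experiment is to spawn a large number of sleeping tasks; the task count here is arbitrary:

import curio

async def worker():
    await curio.sleep(1)

async def main():
    tasks = [await curio.spawn(worker) for _ in range(50000)]
    for t in tasks:
        await t.join()

curio.run(main)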

Sadly, a big disadvantage of coroutines is that any kind of long-running calculation or blocking operation can't be preempted. So, a coroutine might hog the CPU for an extended period and force other coroutines to wait. If you love staring at the so-called "beachball of death" on your laptop, coroutines are for you. The other downside is that code must be written to explicitly take advantage of coroutines (e.g., explicit use of async and await). As a general rule, you can't just plug someone's non-coroutine network package into your coroutine code and expect it to work. Threads, on the other hand, already work with most existing Python code. So, there are always going to be tradeoffs.

Coroutines versus Callbacks

For asynchronous I/O handling, libraries and frameworks will sometimes make use of callback functions. For example, here is an echo server written in the callback style using Python's asyncio module:

import asyncio

class EchoProtocol(asyncio.Protocol):
    def connection_made(self, transport):
        print('Got connection')
        self.transport = transport

    def connection_lost(self, exc):
        print('Connection closed')
        self.transport = None

    def data_received(self, data):
        self.transport.write(data)

if __name__ == '__main__':
    loop = asyncio.get_event_loop()
    coro = loop.create_server(EchoProtocol, '', 25000)
    srv = loop.run_until_complete(coro)
    loop.run_forever()

In this code, different methods of the EchoProtocol class are triggered in response to I/O events.

Programming with callbacks is a well-known technique for asynchronous I/O handling that is used in programming languages without proper support for coroutines. It can be efficient, but it also tends to result in code that's described as a kind of "callback hell": a large number of tiny functions with no easily discerned strand of control flow tying them together.

Coroutines restore a lot of sanity to the overall programming model. The control flow is much easier to follow and the number of required functions tends to be significantly smaller. In fact, the main motivation for adding async and await to Python and other languages is to simplify asynchronous I/O by avoiding callback hell.

Historical Perspective

Coroutines were first invented in the earliest days of computing to solve problems related to multitasking and concurrency. Given the simplicity and benefits of the programming model, one might wonder why they haven't been used more often.

A big part of this is really due to the lack of proper support in mainstream programming languages used to write systems software. For example, languages such as Pascal, C/C++, and Java don't support coroutines. Thus, it's not a technique that most programmers would even think to consider. Even in Python, proper support for coroutines took a long time to emerge. Projects such as Stackless Python supported concepts related to coroutines more than 15 years ago, but it was probably too far ahead of its time to be properly appreciated. Later on, various projects have explored coroutines in different forms, usually involving sneaky hacks surrounding generator functions and C extensions. The addition of the yield from construct in Python 3.3 greatly simplified the problem of writing coroutine libraries. The emergence of async/await in Python 3.5 takes a huge stride in making coroutines more of a first-class object in the Python world. This is really the starting point for Curio.


1.4.3 Layered Architecture

One of the most important design principles of systems programming is layering. Layering is an essential part of understanding how Curio works, so let's briefly discuss this idea.

Operating System Design and Programming Libraries

Think about how I/O works in the operating system for a moment. At the lowest level, you'll find device drivers and other hardware-specific code. However, the bulk of the operating system is not written to operate at this low level. Instead, those details are hidden behind a device-independent abstraction layer that manages file descriptors, I/O buffering, flow control, and other details.

The same layering principle applies to user applications. The operating system provides a set of low-level system calls (traps). These calls vary between operating systems, but you don't really care as a programmer. That's because the implementation details are hidden behind a layer of standardized programming libraries such as the C standard library, various POSIX standards, Microsoft Windows APIs, etc. Working in Python removes you even further from platform-specific library details. For example, a network program written using Python's socket module will work virtually everywhere. This is layering and abstraction in action.

Curio in a Nutshell

Curio primarily operates as a coroutine scheduling layer that sits between an application and the Python standard library. This layer doesn't actually carry out any useful functionality; it is mainly concerned with task scheduling.


Just to emphasize, the scheduler doesn't perform any kind of I/O. There are no internal protocols, streams, buffering, or anything you'd commonly associate with the implementation of an I/O library.

To make the scheduling process work, Curio relies on non-blocking I/O. With non-blocking I/O, any system call that would ordinarily cause the calling process to block fails with an exception. You can try it out manually:

>>> from socket import *
>>> s = socket(AF_INET, SOCK_STREAM)
>>> s.bind(('', 25000))
>>> s.listen(1)
>>> s.setblocking(False)
>>> c, a = s.accept()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/lib/python3.5/socket.py", line 195, in accept
    fd, addr = self._accept()
BlockingIOError: [Errno 35] Resource temporarily unavailable
>>>

To handle the exception, the calling process has to wait for an incoming connection. Curio provides a special "trap" call for this called _read_wait(). Here's a coroutine that uses it:

>>> from curio import run
>>> from curio.traps import _read_wait
>>> async def accept_connection(s):
...     while True:
...         try:
...             return s.accept()
...         except BlockingIOError:
...             await _read_wait(s)
...
>>> c, a = run(accept_connection, s)

With that code running, try making a connection using telnet, nc, or a similar command. You should see the run() function return the result after the connection is made.

Now, a couple of important details about what’s happening:

• The actual I/O operation is performed using the normal accept() method of a socket. It is the same method that's used in synchronous code not involving coroutines.

• Curio only enters the picture if the attempted I/O operation raises a BlockingIOError exception. In that case, the coroutine must wait for I/O and retry the I/O operation later (the retry is why it's enclosed in a while loop).

• Curio does not actually perform any I/O. It is only responsible for waiting. The _read_wait() call suspends until the associated socket can be read.

• Incoming I/O is not handled as an "event" nor are there any associated callback functions. If an incoming connection is received, the coroutine is scheduled to run again. That's it. There is no "event loop." There are no callback functions.

With the newly established connection, write a coroutine that receives some data:

>>> async def read_data(s, maxsize):
...     while True:
...         try:
...             return s.recv(maxsize)
...         except BlockingIOError:
...             await _read_wait(s)
...
>>> data = run(read_data, c, 1024)

Try typing some input into your connection. You should see that data returned. Notice that the code is basically the same as before. An I/O operation is attempted using the normal socket recv() method. If it fails, then the coroutine waits using the _read_wait() call. Just to be clear. There is no event loop and Curio is not performing any I/O. Curio is only responsible for waiting–that is basically the core of it.

On the subject of waiting, here is a list of the things that Curio knows how to wait for:

• Expiration of a timer (e.g., sleeping).

• I/O operations (read, write).

• Completion of a Future from the concurrent.futures standard library.

• Arrival of a Unix signal.

• Release from a wait queue.

• Termination of a coroutine.

Everything else is built up from those low-level primitives.
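For illustration, here is a small sketch (not part of the original examples) of high-level operations that reduce to those primitives: sleeping waits on a timer, join() waits for a task to terminate, and run_in_thread() waits on the completion of a Future serviced by a worker thread.

import time
from curio import run, spawn, sleep, run_in_thread

async def demo():
    await sleep(1)                        # expiration of a timer
    task = await spawn(sleep, 2)
    await task.join()                     # termination of a coroutine
    await run_in_thread(time.sleep, 1)    # completion of a Future from a worker thread

run(demo)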

The Proxy Layer

If you wanted to, you could program directly with low-level calls like _read_wait() as shown in the previous part. However, no one really wants to do that. Instead, it's easier to create a collection of proxy objects that hide the details. For example, you could make a coroutine-based socket proxy class like this:

from curio.traps import _read_wait

class Socket(object):
    def __init__(self, sock):
        self._sock = sock
        self._sock.setblocking(False)

    async def accept(self):
        while True:
            try:
                client, addr = self._sock.accept()
                return Socket(client), addr
            except BlockingIOError:
                await _read_wait(self._sock)

    async def recv(self, maxsize):
        while True:
            try:
                return self._sock.recv(maxsize)
            except BlockingIOError:
                await _read_wait(self._sock)

    # Other socket methods follow...

    # Delegate other socket methods
    def __getattr__(self, name):
        return getattr(self._sock, name)

This class invokes the standard socket methods, but has a small amount of extra code to deal with coroutine scheduling. Using this, your code starts to look much more normal. For example:

async def echo_server(address):
    sock = Socket(socket(AF_INET, SOCK_STREAM))
    sock.bind(address)
    sock.listen(1)
    while True:
        client, addr = await sock.accept()
        print('Connection from', addr)
        await spawn(echo_client, client)

async def echo_client(sock):
    while True:
        data = await sock.recv(100000)
        if not data:
            break
        await sock.sendall(data)

This is exactly what's happening with sockets in Curio. It provides a coroutine wrapper around a normal socket and lets you write normal-looking socket code. It doesn't change the behavior or semantics of how sockets work.

It's important to emphasize that a proxy doesn't change how you interact with an object. You use the same method names as you did before coroutines and you should assume that they have the same underlying behavior. Curio is really only concerned with the scheduling problem–not I/O.

Supported Functionality

For the most part, Curio tries to provide the same I/O functionality that one would typically use in a synchronous program involving threads. This includes sockets, subprocesses, files, synchronization primitives, queues, and various odds-and-ends such as TLS/SSL. You should consult the reference manual or the howto guide for more details and specific programming recipes. The rest of this document focuses more on the higher-level task model and other programming considerations related to using Curio.
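As a quick taste of that functionality, here is a sketch of a TCP echo server using Curio's high-level socket support (adapted here for illustration; consult the reference manual for the exact signatures):

from curio import run, tcp_server

async def echo_client(client, addr):
    print('Connection from', addr)
    while True:
        data = await client.recv(100000)
        if not data:
            break
        await client.sendall(data)
    print('Connection closed')

if __name__ == '__main__':
    run(tcp_server, '', 25000, echo_client)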

1.4.4 The Curio Task Model

When a coroutine runs inside Curio, it becomes a "Task." A major portion of Curio concerns the management and coordination of tasks. This section describes the overall task model and operations involving tasks.


Creating Tasks

An application that uses Curio is always launched by providing an initial coroutine to the run() function. For example:

import curio

async def main():
    print('Starting')
    ...

curio.run(main)

That first coroutine becomes the initial task. If you want to create more tasks that execute concurrently, use the spawn() coroutine. spawn() is only valid inside other coroutines so you might use it to launch more tasks inside main() like this:

import curio

async def child(n):
    print('Sleeping')
    await curio.sleep(n)
    print('Awake again!')

async def main():
    print('Starting')
    await curio.spawn(child, 5)

curio.run(main)

As a general rule, it's not great style to launch a task and to simply forget about it. Instead, you should pick up its result at some point. Use the join() method to do that. For example:

async def main():
    print('Starting')
    task = await curio.spawn(child, 5)
    await task.join()
    print('Quitting')

If you've programmed with threads, the programming model is similar. One important point though—you only use spawn() if you want concurrent task execution. If a coroutine merely wants to call another coroutine in a synchronous manner like a library function, you just use await. For example:

async def main():
    print('Starting')
    await child(5)
    print('Quitting')

Returning Results

The task.join() method returns the final result of a coroutine. For example:

async def add(x, y):
    return x + y

async def main():
    task = await curio.spawn(add, 2, 3)
    result = await task.join()
    print('Result ->', result)      # Prints 5

If an exception occurs in the task, it is wrapped in a TaskError exception. This is a chained exception where the __cause__ attribute contains the actual exception that occurred. For example:

async def main():
    task = await curio.spawn(add, 2, 'Hello')    # Fails due to TypeError
    try:
        result = await task.join()
    except curio.TaskError as err:
        # Reports the resulting TypeError
        print('It failed. Cause:', repr(err.__cause__))

The use of TaskError serves an important, but subtle, purpose here. Due to cancellation and timeouts, the task.join() operation might raise an exception that's unrelated to the underlying task itself. This means that you need to have some way to separate exceptions related to the join() operation versus an exception that was raised inside the task. The TaskError solves this issue–if you get that exception, it means that the task being joined exited with an exception. If you get other exceptions, they are related to some aspect of the join() operation itself (i.e., cancellation), not the underlying Task.

Task Exit

Normally, a task exits when it returns. If you're deeply buried in the guts of a bunch of code and you want to force a task exit, raise a TaskExit exception. For example:

from curio import *

async def coro1():
    print('About to die')
    raise TaskExit()

async def coro2():
    try:
        await coro1()
    except Exception as e:
        print('Something went wrong')

async def coro3():
    await coro2()

try:
    run(coro3())
except TaskExit:
    print('Task exited')

Like the SystemExit built-in exception, TaskExit is a subclass of BaseException and won't be caught by exception handlers that look for Exception.

If you want all tasks to die, raise a SystemExit or KernelExit exception instead. If this is raised in a task, the entire Curio kernel stops. In most situations, this leads to an orderly shutdown of all remaining tasks–each task being given a cancellation request.
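As a rough sketch (not an example from the manual), a background task could bring the whole kernel down like this:

from curio import run, spawn, sleep, KernelExit

async def watcher():
    await sleep(5)
    raise KernelExit()       # stops the entire kernel; other tasks get a cancellation request

async def main():
    await spawn(watcher, daemon=True)
    await sleep(1000)        # never finishes normally; cancelled on shutdown

run(main)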


Task Cancellation

Curio allows any task to be cancelled. Here’s an example:

import curio

async def child(n):
    print('Sleeping')
    await curio.sleep(n)
    print('Awake again!')

async def main():
    print('Starting')
    task = await curio.spawn(child, 5)
    await curio.sleep(1)
    await task.cancel()       # Cancel the child

curio.run(main)

Cancellation only occurs on blocking operations involving the await keyword (e.g., the curio.sleep() call in the child). When a task is cancelled, the current operation fails with a TaskCancelled exception. This exception can be caught, but if doing so, you usually use its base class CancelledError:

async def child(n):
    print('Sleeping')
    try:
        await curio.sleep(n)
        print('Awake again!')
    except curio.CancelledError:
        print('Rudely cancelled')
        raise

A cancellation can be caught, but should not be ignored. In fact, the task.cancel() method blocks until the task actually terminates. If ignored, the cancelling task would simply hang forever waiting. That's probably not what you want. In most cases, code that catches cancellation should perform some cleanup and then re-raise the exception as shown above.

Cancellation does not propagate to child tasks. For example, consider this code:

from curio import sleep, spawn, run, CancelledError

async def sleeper(n):
    print('Sleeping for', n)
    await sleep(n)
    print('Awake again')

async def coro():
    task = await spawn(sleeper, 10)
    try:
        await task.join()
    except CancelledError:
        print('Cancelled')
        raise

async def main():
    task = await spawn(coro)
    await sleep(1)
    await task.cancel()


run(main)

If you run this code, the coro() coroutine is cancelled, but its child task continues to run afterwards. The output looks like this:

Sleeping for 10
Cancelled
Awake again

To cancel children, they must be explicitly cancelled. Rewrite coro() like this:

async def coro():
    task = await spawn(sleeper, 10)
    try:
        await task.join()
    except CancelledError:
        print('Cancelled')
        await task.cancel()     # Cancel child task
        raise

Since cancellation doesn't propagate except explicitly as shown, one way to shield a coroutine from cancellation is to launch it as a separate task using spawn(). Unless it's directly cancelled, a task always runs to completion.

Daemon Tasks

Normally Curio runs tasks until all tasks have completed. As an option, you can launch a so-called "daemon" task. For example:

async def spinner():
    while True:
        print('Spinning')
        await sleep(5)

async def main():
    await spawn(spinner, daemon=True)
    await sleep(20)
    print('Main. Goodbye')

run(main) # Runs until main() returns

A daemon task runs in the background, potentially forever. The Kernel.run() method will execute tasks until all non-daemon tasks are finished. If you call the kernel run() method again with a new coroutine, the daemon tasks will still be there. If you shut down the kernel, the daemon tasks are cancelled. Note: the high-level run() function performs a shutdown so it would shut down all of the daemon tasks on your behalf.
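A rough sketch of that behavior follows; it assumes that a Kernel instance can be used as a context manager and that its run() method accepts a coroutine, so check the reference manual before relying on the exact interface:

from curio import Kernel, spawn, sleep

async def spinner():
    while True:
        print('Spinning')
        await sleep(5)

async def main(label):
    await spawn(spinner, daemon=True)
    await sleep(12)
    print(label, 'done')

with Kernel() as kernel:
    kernel.run(main('first'))
    kernel.run(main('second'))   # the spinner started by the first call is still alive here
# leaving the with-block shuts down the kernel and cancels the daemon task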

Timeouts

Curio allows every blocking operation to be aborted with a timeout. However, instead of instrumenting every possible API call with a timeout argument, it is applied through timeout_after(seconds [, coro]). The specified timeout serves as a completion deadline for the supplied coroutine. For example:

from curio import *


async def child():
    print('Yawn. Getting sleepy')
    await sleep(10)
    print('Back awake')

async def main():
    try:
        await timeout_after(1, child)
    except TaskTimeout:
        print('Timeout')

run(main)

After the specified timeout period expires, a TaskTimeout exception is raised by whatever blocking operation happens to be in progress. TaskTimeout is a subclass of CancelledError so code that catches the latter exception can be used to catch both kinds of cancellation. It is critical to emphasize that timeouts can only occur on operations that block in Curio. If the code runs away to go mine bitcoins for the next ten hours, a timeout won't be raised–remember that coroutines can't be preempted except on blocking operations.
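If you need a CPU-bound loop to stay responsive to timeouts and cancellation, one common workaround is to yield to the kernel periodically with a zero-length sleep. A minimal sketch, where still_working() and do_some_work() are hypothetical placeholders:

from curio import sleep

async def crunch():
    while still_working():       # hypothetical: some CPU-bound condition
        do_some_work()           # hypothetical: a chunk of non-blocking work
        await sleep(0)           # blocking point where timeouts/cancellation can be delivered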

The timeout_after() function can also be used as a context manager. This allows it to be applied to an entire block of statements. For example:

try:
    async with timeout_after(5):
        await coro1()
        await coro2()
        ...
except TaskTimeout:
    print('Timeout')

Sometimes you might just want to stop an operation and silently move on. For that, you can use the ignore_after() function. It works like timeout_after() except that it doesn't raise an exception. For example:

result = await ignore_after(seconds, coro)

In the event of a timeout, the return result is None. So, instead of using try-except, you could do this:

if await ignore_after(seconds, coro) is None:
    print('Timeout')

The ignore_after() function also works as a context-manager. When used in this way, an expired attribute is set to True when a timeout occurs. For example:

async with ignore_after(seconds) as t:
    await coro1()
    await coro2()

if t.expired:
    print('Timeout')

Nested Timeouts

Timeouts can be nested, but the semantics are a bit hair-raising and surprising at first. To illustrate, consider this bit of code:


async def coro1():
    print('Coro1 Start')
    await sleep(10)
    print('Coro1 Success')

async def coro2():
    print('Coro2 Start')
    await sleep(1)
    print('Coro2 Success')

async def child():
    try:
        await timeout_after(50, coro1)
    except TaskTimeout:
        print('Coro1 Timeout')

    await coro2()

async def main():
    try:
        await timeout_after(5, child)
    except TaskTimeout:
        print('Parent Timeout')

In this code, an outer coroutine main() applies a 5-second timeout to an inner coroutine child(). Internally, child() applies a 50-second timeout to another coroutine coro1(). If you run this program, the outer timeout fires, but the inner one remains silent. You'll get this output:

Coro1 Start
Parent Timeout (appears after 5 seconds)

To understand this output and why the 'Coro1 Timeout' message doesn't appear, there are some important rules in play. First, the actual timeout period in effect is always the smallest of all of the applied timeout values. In this code, the outer main() coroutine applies a 5 second timeout to the child() coroutine. Even though the child() coroutine attempts to apply a 50 second timeout to coro1(), the 5 second expiration of the outer timeout is kept in force. This is why coro1() is cancelled when it sleeps for 10 seconds.

The second rule of timeouts is that only the outer-most timeout that expires receives a TaskTimeout exception. In this case, the timeout_after(5) operation in main() is the timeout that has expired. Thus, it gets the exception. The inner call to timeout_after(50) also aborts with an exception, but it is a TimeoutCancellationError. This signals that the code is being cancelled due to a timeout, but not the one that was requested. That is, the operation is NOT being cancelled due to 50 seconds passing. Instead, some kind of outer timeout is responsible. Normally, TimeoutCancellationError would not be caught. Instead, it silently propagates to the outer timeout which handles it.

Admittedly, all of this is a bit subtle, but the key idea is that an outer timeout is always allowed to cancel an inner timeout. Moreover, the TaskTimeout exception will only arise out of the timeout_after() call that has expired. This arrangement allows for tricky corner cases such as this example:

async def child():
    while True:
        try:
            result = await timeout_after(1, coro)
            ...
        except TaskTimeout:
            print('Timed out. Retrying')


async def parent():
    try:
        await timeout_after(5, child)
    except TaskTimeout:
        print('Timeout')

In this code, it might appear that child() will never terminate due to the fact that it catches TaskTimeout exceptions and continues to loop forever. Not so–when the timeout_after() operation in parent() expires, a TimeoutCancellationError is raised in child() instead. This causes the loop to stop.

There are still some ways that timeouts can go wrong and you'll find yourself battling a sky full of swooping manta rays. The best way to make your head explode is to catch TaskTimeout exceptions in code that doesn't use timeout_after(). For example:

async def child():
    while True:
        try:
            print('Sleeping')
            await sleep(10)
        except TaskTimeout:
            print('Ha! Nope.')

async def parent():
    try:
        await timeout_after(5, child)
    except TaskTimeout:
        print('Timeout')

In this code, the child() catches TaskTimeout, but basically ignores it–running forever. The parent() coroutine will hang forever waiting for the child() to exit. The output of the program will look like this:

Sleeping
Ha! Nope. (after 5 seconds)
Sleeping
Sleeping
... forever...

Bottom line: Don’t catch free-floating TaskTimeout exceptions unless your code immediately re-raises them.

Optional Timeouts

As a special case, you can also supply None as a timeout for the timeout_after() and ignore_after() functions. For example:

await timeout_after(None, coro)

When supplied, this leaves any previously set outer timeout in effect. If an outer timeout expires, a TimeoutCancellationError is raised. If no outer timeout is in effect, it does nothing.

The primary use case of this is to more cleanly write code that involves an optional timeout setting. For example:

async def func(..., timeout=None):
    try:
        async with timeout_after(timeout):
            statements
            ...
    except TaskTimeout as e:
        # Timeout occurred directly due to the supplied timeout argument
        ...
    except TimeoutCancellationError as e:
        # Timeout occurred, but it was due to an outer timeout
        # (Normally you wouldn't catch this exception)
        ...
        raise

Without this feature, you would have to special case the timeout. For example:

async def func(..., timeout=None):
    if timeout:
        # Code with a timeout applied
        try:
            async with timeout_after(timeout):
                statements
                ...
        except TaskTimeout as e:
            # Timeout occurred directly due to the supplied timeout argument
            ...
    else:
        # Code without a timeout applied
        statements
        ...

That’s rather ugly–don’t do that. Prefer to use timeout_after(None) to deal with an optional timeout.

Cancellation Control

Sometimes it is advantageous to block the delivery of cancellation exceptions at specific points in your code. Perhaps your program is performing a critical operation that shouldn't be interrupted. To block cancellation, use the disable_cancellation() function as a context manager like this:

async def coro():
    ...
    async with disable_cancellation():
        await op1()
        await op2()
        ...

    await blocking_op()     # Cancellation delivered here (if any)

When used, the enclosed statements are guaranteed to never abort with a CancelledError exception (this includes timeouts). If any kind of cancellation request has occurred, it won't be processed until the next blocking operation outside of the context manager.

If you are trying to shield a single operation, you can also pass a coroutine to disable_cancellation() like this:

async def coro():
    ...
    await disable_cancellation(op)
    ...

Code that disables cancellation can explicitly poll for the presence of a cancellation request using check_cancellation() like this:


async def coro():
    ...
    async with disable_cancellation():
        while True:
            await op1()
            await op2()
            ...
            if await check_cancellation():
                break      # We're done

    await blocking_op()    # Cancellation delivered here (if any)

The check_cancellation() function returns the pending exception. You can use the result to find out more specific information if you want. For example:

async def coro():
    ...
    async with disable_cancellation():
        while True:
            await op1()
            await op2()
            ...
            cancel_exc = await check_cancellation()
            if isinstance(cancel_exc, TaskTimeout):
                print('Time expired (shrug)')
                await set_cancellation(None)
            else:
                break

    await blocking_op()    # Cancellation delivered here (if any)

The set_cancellation() function can be used to clear or change the pending cancellation exception to something else. The above code ignores the TaskTimeout exception and keeps running.

When cancellation is disabled, it can be selectively enabled again using enable_cancellation() like this:

async def coro():
    ...
    async with disable_cancellation():
        while True:
            await op1()
            await op2()

            async with enable_cancellation():
                # These operations can be cancelled
                await op3()
                await op4()

            if await check_cancellation():
                break      # We're done

    await blocking_op()    # Cancellation delivered here (if any)

When cancellation is re-enabled, it allows the enclosed statements to receive cancellation requests and timeouts as exceptions, as normal.

An important feature of enable_cancellation() is that it does not propagate cancellation exceptions–meaning that it does not allow such exceptions to be raised in the outer block of statements where cancellation is disabled.


Instead, if there is a cancellation, it becomes "pending" at the conclusion of the enable_cancellation() context. It will be delivered at the next blocking operation where cancellation is allowed. Here is a concrete example that illustrates this behavior:

async def coro():
    async with disable_cancellation():
        print('Hello')
        async with enable_cancellation():
            print('About to die')
            raise CancelledError()
            print('Never printed')
        print('Yawn')
        await sleep(2)
    print('About to deep sleep')
    await sleep(5000)

run(coro)

If you run this code, you’ll get output like this:

Hello
About to die
Yawn
About to deep sleep
Traceback (most recent call last):
...
curio.errors.CancelledError

Carefully observe that cancellation is being reported on the first blocking operation outside the disable_cancellation() block. There will be a quiz later.

It is fine for disable_cancellation() blocks to be nested. This makes them safe for use in subroutines. For example:

async def coro1():
    async with disable_cancellation():
        await coro2()

    await blocking_op1()     # <-- Cancellation reported here

async def coro2():
    async with disable_cancellation():
        ...

    await blocking_op2()

run(coro1)

If nested, cancellation is reported at the first blocking operation that occurs when cancellation is re-enabled.

It is illegal for enable_cancellation() to be used outside of a disable_cancellation() context. Doing so results in a RuntimeError exception. Cancellation is normally enabled in Curio so it makes little sense to use this feature in isolation. Correct usage also tends to require careful coordination with code in which cancellation is disabled. For that reason, it can't be used by itself.
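For instance, this sketch triggers that RuntimeError because cancellation is already enabled at that point:

from curio import enable_cancellation

async def coro():
    async with enable_cancellation():    # RuntimeError: used outside disable_cancellation()
        ...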

It is also illegal for any kind of cancellation exception to be raised in a disable_cancellation() context. For example:


async def coro():
    async with disable_cancellation():
        ...
        raise CancelledError()    # ILLEGAL
        ...

Doing this causes your program to die with a RuntimeError. The disable_cancellation() feature is meant to be a strong guarantee that cancellation-related exceptions are not raised in the given block of statements. If you raise such an exception, you're violating the rules.

It is legal for cancellation exceptions to be raised inside an enable_cancellation() context. For example:

async def coro():
    async with disable_cancellation():
        ...
        async with enable_cancellation():
            ...
            raise CancelledError()    # LEGAL
        # Exception becomes "pending" here
        ...
    await blocking_op()               # Cancellation reported here

Cancellation exceptions that escape enable_cancellation() become pending and are reported when blocking operations are performed later.

Programming Considerations for Cancellation

Cancellation and timeouts are an important part of Curio and there are a few considerations to keep in mind when writing library functions.

If you need to perform some kind of cleanup action such as killing a helper task, you'll probably want to wrap it in a try-finally block like this:

async def coro():
    task = await spawn(helper)
    try:
        ...
    finally:
        await task.cancel()

This will make sure you properly clean up after yourself. Certain objects might work as asynchronous context managers. Prefer to use that if available. For example:

async def coro():
    task = await spawn(helper)
    async with task:
        ...
    # task cancelled here

If you must catch cancellation errors, make sure you re-raise them. It's not legal to simply ignore cancellation. Correct cancellation handling code will typically look like this:

async def coro():
    try:
        ...
    except CancelledError:
        # Some kind of cleanup
        ...
        raise

If you are going to perform cleanup actions in response to cancellation or timeout, be extremely careful with blocking operations in exception handlers. In rare instances, it's possible that your code could receive ANOTHER cancellation exception while it's handling the first one (e.g., getting a direct cancellation request while handling a timeout). Here's where things might go terribly wrong:

async def coro():
    try:
        ...
    except CancelledError:
        ...
        await blocking_op()    # Could receive cancellation/timeout
        other_op()             # Won't execute
        raise

If that happens, the sky will suddenly turn black from an incoming swarm of howling locusts. It will not end well as you try to figure out what combination of mysterious witchcraft led to part of your exception handler not fully executing. If you absolutely must block to perform a cleanup action, shield that operation from cancellation like this:

async def coro():
    try:
        ...
    except CancelledError:
        ...
        await disable_cancellation(blocking_op)    # Will not be cancelled
        other_op()                                 # Will execute
        raise

You might consider writing code that returns partially completed results on cancellation. Partial results can be attached to the resulting exception. For example:

async def sendall(sock, data):
    bytes_sent = 0
    try:
        while data:
            nsent = await sock.send(data)
            bytes_sent += nsent
            data = data[nsent:]
    except CancelledError as e:
        e.bytes_sent = bytes_sent
        raise

This allows code further up the call-stack to take action and maybe recover in some sane way. For example:

async def send_message(sock, msg):
    try:
        await sendall(sock, msg)
    except TaskTimeout as e:
        print('Well, that sure is slow')
        print('Only sent %d bytes' % e.bytes_sent)

Finally, be extremely careful writing library code that involves infinite loops. You will need to make sure that the code can terminate through cancellation in some manner. This either means making sure that cancellation is enabled (the default) or explicitly checking for it in the loop using check_cancellation(). For example:

async def run_forever():
    while True:
        await coro()
        ...
        if await check_cancellation():
            break

Just to emphasize, you normally don't need to check for cancellation by default though–you'd only need this if it were disabled prior to calling run_forever().

Waiting for Multiple Tasks and Concurrency

When a task is launched using spawn(), it executes concurrently with the creating coroutine. If you need to wait for the task to finish, you normally use join() as described in the previous section.

If you create multiple tasks, you might want to wait for them to complete in more advanced ways. For example, obtaining results one at a time in the order that tasks finish. Or waiting for the first result to come back and cancelling the remaining tasks afterwards.

For these kinds of problems, you can create a TaskGroup instance. Here is an example that obtains results in the order that tasks are completed:

async def main():
    async with TaskGroup() as g:
        # Create some tasks
        await g.spawn(coro1)
        await g.spawn(coro2)
        await g.spawn(coro3)
        async for task in g:
            try:
                result = await task.join()
                print('Success:', result)
            except TaskError as e:
                print('Failed:', e)

To wait for any task to complete and to have remaining tasks cancelled, modify the code as follows:

async def main():
    async with TaskGroup(wait=any) as g:
        # Create some tasks
        await g.spawn(coro1)
        await g.spawn(coro2)
        await g.spawn(coro3)

    # Get result on first completed task
    result = g.completed.result()

If any task in a task group fails with an unexpected exception, all of the tasks in the group are cancelled and a TaskGroupError exception is raised. This exception contains more information about what happened including all of the tasks that failed. For example:

async def bad1():
    raise ValueError('Bad value')

async def bad2():
    raise RuntimeError('Whoa!')

async def main():
    try:
        async with TaskGroup() as g:
            await g.spawn(bad1)
            await g.spawn(bad2)
    except TaskGroupError as e:
        print(e.errors)    # The set { ValueError, RuntimeError }

        # Iterate over all failed tasks and print their exception
        for task in e:
            print(task, e)

If a task group is cancelled while waiting, all tasks in the group are also cancelled.
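Here is a sketch of one way that can happen, using an enclosing timeout (coro1 and coro2 are placeholders):

async def main():
    try:
        async with timeout_after(5):
            async with TaskGroup() as g:
                await g.spawn(coro1)
                await g.spawn(coro2)
                # If the timeout expires while waiting for the group to finish,
                # every task still in the group receives a cancellation request
    except TaskTimeout:
        print('Group timed out')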

Sometimes you might want to launch long-running tasks into a group, not knowing when they will finish. This commonly occurs in server code. One way to manage this is to launch the server into its own task under the group and then reap child tasks one at a time as they complete. For example:

async def client(conn):
    ...

async def server(group):
    while True:
        conn = await get_next_connection()
        await group.spawn(client, conn)

async def main():
    async with TaskGroup() as g:
        await g.spawn(server, g)
        async for task in g:
            # task is a completed task. Could look at it or ignore it
            pass

This might look a bit unusual, but it will keep the group from filling up with dead tasks and it still allows cancellation to kill everything in the group if you want.

Getting a Task Self-Reference

When a coroutine is running in Curio, there is always an associated Task instance. It is returned by the spawn() function. For example:

task = await spawn(coro)

The Task instance is normally only needed for operations involving joining or cancellation and typically those steps are performed in the same code that called spawn(). If for some reason, you need the Task instance and don't have a reference to it available, you can use current_task() like this:

from curio import current_task

async def coro():
    # Get the Task that's running me
    task = await current_task()     # Get Task instance
    ...


Here's a more interesting example of a function that applies a watchdog to the current task, cancelling it if nothing happens within a certain time period:

from curio import *

async def watchdog(interval):
    task = await current_task()
    async def watcher():
        while not task.terminated:
            cycles = task.cycles
            await sleep(interval)
            if cycles == task.cycles:
                print('Cancelling', task)
                await task.cancel()
    await spawn(watcher)

async def coro():
    await watchdog(30)      # Enable a watchdog timer
    await sleep(10000)

run(coro)

In this code, you can see how current_task() is used to get a Task self-reference in the watchdog() coroutine. watchdog() then uses it to monitor the number of execution cycles completed and to issue a cancellation if nothing seems to be happening.

At a high level, obtaining a task self-reference simplifies the API. For example, the coro() code merely calls watchdog(30). There's no need to pass an extra Task instance around in the API–it can be easily obtained if it's needed.

1.4.5 Programming with Threads

Asynchronous I/O is often viewed as an alternative to thread programming (e.g., Threads Bad!). However, it's really not an either-or question. Threads are still useful for a variety of things. In this section, we look at some strategies for programming and interacting with threads in Curio.

Execution of Blocking Operations

Blocking operations are a serious problem for any asynchronous code. Of particular concern are calls to normal synchronous functions that might perform some kind of hidden I/O behind the scenes. For example, suppose you had some code like this:

import socket

async def handler(client, addr):
    hostinfo = socket.gethostbyaddr(addr[0])
    ...

In this code, the gethostbyaddr() function performs a reverse-DNS lookup on an address. It's not CPU intensive, but while it completes, it's going to completely block the Curio kernel loop from executing any other work. It's not the sort of thing that you'd want in your program. Under heavy load, you might find your program to be sort of glitchy or laggy.

To fix the problem, you could rewrite the operation entirely using asynchronous I/O operations. However, that's not always practical. So, an alternative approach is to offload it to a background thread using run_in_thread() like this:

import socket
from curio import run_in_thread

async def handler(client, addr):
    hostinfo = await run_in_thread(socket.gethostbyaddr, addr[0])
    ...

In this code, the execution of gethostbyaddr() takes place in its own thread, freeing the Curio kernel loop to work on other tasks in the meantime.

Internally, Curio maintains a pool of preallocated threads dedicated to performing synchronous operations like this (by default the pool consists of 64 worker threads). The run_in_thread() function uses this pool. You're not really supposed to worry about those details though.

Various parts of Curio use run_in_thread() behind the scenes. For example, the curio.socket module provides replacements for various blocking operations:

from curio import socket

async def handler(client, addr):
    hostinfo = await socket.gethostbyaddr(addr[0])   # Uses threads
    ...

Another place where threads are used internally is in file I/O with standard files on the file system. For example, if you use the Curio aopen() function:

from curio import aopen

async def coro(filename):
    async with aopen(filename) as f:
        data = await f.read()
        ...

In this code, it might appear as if asynchronous I/O is being performed on files. Not really–it's all smoke and mirrors with background threads (if you must know, this approach to files is not unique to Curio though).

One caution with run_in_thread() is that it should probably only be used on operations where there is an expectation of it completing in the near future. Technically, you could use it to execute blocking operations that might wait for long time periods. For example, waiting on a thread-event:

import threading
from curio import run_in_thread

evt = threading.Event()    # A thread-event (not Curio)

async def worker():
    await run_in_thread(evt.wait)    # Danger
    ...

Yes, this "works", but it also consumes a worker thread and makes it unavailable for other use as long as it waits for the event. If you launched a large number of worker tasks, there is a possibility that you would exhaust all of the available threads in Curio's internal thread pool. At that point, all further run_in_thread() operations will block and your code will likely deadlock. Don't do that. Reserve the run_in_thread() function for operations that you know are basically going to run to completion at that moment.

For blocking operations involving a high degree of concurrency and usage of shared resources such as thread locks and events, prefer to use block_in_thread() instead. For example:


import threading
from curio import block_in_thread

evt = threading.Event()    # A thread-event (not Curio)

async def worker():
    await block_in_thread(evt.wait)    # Better
    ...

block_in_thread() still uses a background thread, but only one background thread is used regardless of how many tasks try to execute the same callable. For example, if you launched 1000 worker tasks and they all called block_in_thread(evt.wait) on the same event, they are serviced by a single thread. If you used run_in_thread(evt.wait) instead, each request would use its own thread and you'd exhaust the thread pool. It is important to note that this throttling is based on each unique callable. If two different workers used block_in_thread() on two different events, then they each get their own background thread because the evt.wait() operation would represent a different callable.

Behind the scenes, block_in_thread() coordinates and throttles tasks using a semaphore. You can use a similar technique more generally for throttling the use of threads (or any resource). For example:

from curio import run_in_thread, Semaphore

throttle = Semaphore(5) # Allow 5 workers to use threads at once

async def worker():
    async with throttle:
        await run_in_thread(some_callable)
        ...

Threads and Cancellation

Both the run_in_thread() and block_in_thread() functions allow the pending operation to be cancelled. However, if the operation in question has already started execution, it will fully run to completion behind the scenes. Sadly, threads do not provide any mechanism for cancellation. Thus, there is no way to make them stop running once they've started.

If work submitted to a thread is cancelled, Curio sets the thread aside and removes it from Curio's internal thread pool. The thread will continue to run to completion, but at least it won't block progress of future operations submitted to run_in_thread(). Once the work completes, the thread will self-terminate. Be aware that there is still a chance you could make Curio consume a lot of background threads if you submitted a large number of long-running tasks and had them all cancelled. Here's an example:

from curio import ignore_after, run_in_thread, run
import time

async def main():
    for i in range(1000):
        await ignore_after(0.01, run_in_thread(time.sleep, 100))

run(main)

In this code, Curio would spin up 1000 background worker threads–all of which end up as "zombies" just waiting to finish their work (which is now abandoned because of the timeout). Try not to do this.

The run_in_thread() and block_in_thread() functions optionally allow a cancellation callback function to be registered. This function will be triggered in the event of cancellation and gives a thread an opportunity to perform some kind of cleanup action. For example:

import time
from curio import run, sleep, run_in_thread, ignore_after

def add(x, y):
    time.sleep(10)
    return x + y

def on_cancel(future):
    print('Where did everyone go?')
    print('Result was:', future.result())

async def main():
    await ignore_after(1, run_in_thread(add, 2, 3, call_on_cancel=on_cancel))
    print('Yawn!')
    await sleep(20)
    print('Goodbye')

run(main)

If you run this code, you’ll get output like this:

Yawn!
Where did everyone go?
Result was: 5
Goodbye

The function given to call_on_cancel is a synchronous function that receives the underlying Future instance that was being used to execute the background work. This function executes in the same thread that was performing the work itself.

The call_on_cancel functionality is critical for certain kinds of operations where the cancellation of a thread would cause unintended mayhem. For example, if you tried to acquire a thread lock using run_in_thread(), you should probably do this:

import threading

lock = threading.Lock()

async def coro():
    await run_in_thread(lock.acquire,
                        call_on_cancel=lambda fut: lock.release())
    ...
    await run_in_thread(lock.release)

If you don't do this and the operation got cancelled, the thread would run to completion, the lock would be acquired, and then nobody would be around to release it again. The call_on_cancel argument is a safety net that ensures that the lock gets released in the event that Curio is no longer paying attention.

Thread-Task Synchronization

Acknowledging the reality that some work still might have to be performed by threads, even in code that uses asynchronous I/O, you may be faced with the problem of coordinating Curio tasks and external threads in some way.

One problem concerns task-thread coordination on thread locks and events. Generally, it's not safe for coroutines to wait on a foreign thread lock. Doing so can block the whole underlying kernel and everything will come to a grinding halt. To wait on a foreign lock, use the abide() function. For example:


import threading
from curio import abide

lock = threading.Lock()

# Curio task
async def coro():
    async with abide(lock):
        # Critical section
        ...

# Synchronous code (in a thread)
def func():
    with lock:
        # Critical section
        ...

abide() adapts a foreign lock to an asynchronous context-manager and guides its execution using a backing thread. Internally, abide() is using an asynchronous context manager that is roughly equivalent to this:

class AbideManager(object):
    def __init__(self, manager):
        self.manager = manager

    async def __aenter__(self):
        await curio.block_in_thread(self.manager.__enter__)
        return self

    async def __aexit__(self, *args):
        await curio.run_in_thread(self.manager.__exit__, *args)

The exact details vary due to some tricky corner cases, but the overall gist is that threads are used to run it and it won't block the Curio kernel.

You can use abide() with any foreign Lock or Semaphore object (e.g., it also works with locks defined in the multiprocessing module). abide() tries to be efficient with how it utilizes threads. For example, if you spawn up 10000 Curio tasks and have them all wait on the same lock, only one backing thread gets used.
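As a small illustrative sketch (not from the manual), the same pattern with a multiprocessing lock looks like this:

import multiprocessing
from curio import abide

mp_lock = multiprocessing.Lock()

async def coro():
    async with abide(mp_lock):
        # Critical section shared with code in other processes
        ...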

abide() can work with reentrant locks and condition variables, but there are some issues concerning the backing thread used to execute the various locking operations. In this case, the same thread needs to be used for all operations. To indicate this, use the reserve_thread keyword argument:

import threading

cond = threading.Condition()

# Curio task
async def coro():
    async with abide(cond, reserve_thread=True) as c:
        # c is a wrapped version of cond() with async methods
        ...
        # Executes on the same thread as used to acquire cond
        await c.wait()

# Synchronous code (in a thread)
def func():
    with cond:
        ...
        cond.notify()
        ...

When the reserve_thread option is used, a background thread is reserved for the entire execution of the with-block. Be aware that a high degree of concurrency could cause a lot of threads to be used.

As of this writing, Curio can synchronize with an RLock, but full reentrancy is not supported–that is, nested abide() calls on the same lock won't work correctly. This limitation may be lifted in a future version.

abide() also works with operations involving events. For example, here is how you wait for an event:

import threading

evt = threading.Event() # Thread event

async def waiter():
    await abide(evt.wait)
    print('Awake!')

A curious aspect of abide() is that it also works with Curio's own synchronization primitives. So, this code also works fine:

import curio

lock = curio.Lock()

# Curio task
async def coro():
    async with abide(lock):
        # Critical section
        ...

If the provided lock already works asynchronously, abide() turns into an identity function. That is, it doesn't really do anything. For lack of a better description, this gives you the ability to have a kind of "duck-synchronization" in your program. If a lock looks like a lock, abide() will probably work with it regardless of where it came from.

Finally, a caution: having Curio synchronize with foreign locks is not the fastest thing. There are backing threads and a fair bit of communication across the async-synchronous boundary. If you're doing a bunch of fine-grained locking where performance is critical, don't use abide(). In fact, try to do almost anything else.

Thread-Task Queuing

If you must bridge the world of asynchronous tasks and threads, perhaps the most sane way to do it is to use a queue. Curio provides a modestly named UniversalQueue class that does just that. Basically, a UniversalQueue is a queue that fully supports queuing operations from any combination of threads or tasks. For example, you can have async worker tasks reading data written by a producer thread:

from curio import run, UniversalQueue, spawn, run_in_thread

import time
import threading

# An async task
async def consumer(q):
    print('Consumer starting')
    while True:
        item = await q.get()
        if item is None:
            break
        print('Got:', item)
        await q.task_done()
    print('Consumer done')

# A threaded producer
def producer(q):
    for i in range(10):
        q.put(i)
        time.sleep(1)
    q.join()
    print('Producer done')

async def main():
    q = UniversalQueue()
    t1 = await spawn(consumer, q)
    t2 = threading.Thread(target=producer, args=(q,))
    t2.start()
    await run_in_thread(t2.join)
    await q.put(None)
    await t1.join()

run(main)

Or you can flip it around and have a threaded consumer read data from async tasks:

from curio import run, UniversalQueue, spawn, run_in_thread, sleep

import threading

def consumer(q):
    print('Consumer starting')
    while True:
        item = q.get()
        if item is None:
            break
        print('Got:', item)
        q.task_done()
    print('Consumer done')

async def producer(q):
    for i in range(10):
        await q.put(i)
        await sleep(1)
    await q.join()
    print('Producer done')

async def main():
    q = UniversalQueue()
    t1 = threading.Thread(target=consumer, args=(q,))
    t1.start()
    t2 = await spawn(producer, q)
    await t2.join()
    await q.put(None)
    await run_in_thread(t1.join)

run(main)

Or, if you're feeling particularly diabolical, you can even use a UniversalQueue to communicate between tasks running in two different Curio kernels:

from curio import run, UniversalQueue, sleep

import threading

# An async task
async def consumer(q):
    print('Consumer starting')
    while True:
        item = await q.get()
        if item is None:
            break
        print('Got:', item)
        await q.task_done()
    print('Consumer done')

# An async task
async def producer(q):
    for i in range(10):
        await q.put(i)
        await sleep(1)
    await q.join()
    print('Producer done')

def main():
    q = UniversalQueue()
    t1 = threading.Thread(target=run, args=(consumer, q))
    t1.start()
    t2 = threading.Thread(target=run, args=(producer, q))
    t2.start()
    t2.join()
    q.put(None)
    t1.join()

main()

The programming API is the same in both worlds. For synchronous code, you use the get() and put() methods. For asynchronous code, you use the same methods, but preface them with an await.
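In other words, the difference is only the await (an illustrative fragment; q is a UniversalQueue and item is whatever you are passing around):

# Synchronous code (thread side)
q.put(item)
item = q.get()
q.task_done()

# Asynchronous code (task side)
await q.put(item)
item = await q.get()
await q.task_done()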

The underlying implementation is efficient for a large number of waiting asynchronous tasks. There is no difference between a single task waiting for data and ten thousand tasks waiting for data. Obviously the situation is a bit different for threads (you probably wouldn't want to have 10000 threads waiting on a queue, but if you did, a UniversalQueue would still work).

One notable feature of UniversalQueue is that it is cancellation and timeout safe on the async side. For example, you can write code like this:

# An async task
async def consumer(q):
    print('Consumer starting')
    while True:
        try:
            item = await timeout_after(5, q.get)
        except TaskTimeout:
            print('Timeout!')
            continue
        if item is None:
            break
        print('Got:', item)
        await q.task_done()
    print('Consumer done')

In the event of a timeout, the q.get() operation will abort, but no queue data is lost. Should an item be made available, the next q.get() operation will return it. This is different than performing get operations on a standard thread-queue. For example, if you used run_in_thread(q.get) to get an item on a standard thread queue, a timeout or cancellation actually causes a queue item to be lost.
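For contrast, here is a sketch of the risky pattern being described, using a standard library queue (for illustration only, not a recommended approach):

import queue
from curio import run_in_thread, ignore_after

stdq = queue.Queue()    # a plain thread queue, not a UniversalQueue

async def risky_consumer():
    # If the timeout fires after the worker thread has already dequeued
    # an item inside stdq.get(), that item is silently lost.
    item = await ignore_after(5, run_in_thread(stdq.get))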

Asynchronous Threads

Come closer. No, I mean real close. Let's have a serious talk about threads for a moment. If you're going to write a SERIOUS thread program, you're probably going to want a few locks. And once you have a few locks, you'll probably want some semaphores. Those semaphores are going to be lonely without a few events and condition variables to keep them company. All these things will live together in a messy apartment along with a pet queue. It will be chaos. It all sounds a bit better if you put in an internet-connected coffee pot and call the apartment a coworking space. But, I digress.

But wait a minute, Curio already provides all of these wonderful things. Locks, semaphores, events, condition variables, pet queues and more. You might think that they can only be used for this funny world of coroutines though. No! "Get out!"

Let’s start with a little thread code:

import time

def worker(name, lock, n, interval):
    while n > 0:
        with lock:
            print('%s working %d' % (name, n))
            time.sleep(interval)
            n -= 1

def main():
    from threading import Thread, Semaphore
    s = Semaphore(2)
    t1 = Thread(target=worker, args=('curly', s, 2, 2))
    t1.start()
    t2 = Thread(target=worker, args=('moe', s, 4, 1))
    t2.start()
    t3 = Thread(target=worker, args=('larry', s, 8, 0.5))
    t3.start()
    t1.join()
    t2.join()
    t3.join()

if __name__ == '__main__':
    start = time.time()
    main()
    print('Took %s seconds' % (time.time() - start))

In this code, there are three workers. They operate on different time intervals, but they all execute concurrently. However, there is a semaphore thrown into the mix to throttle them so that only two workers can run at once. The output might vary a bit due to thread scheduling, but it could look like this:

curly working 2
moe working 4
moe working 3
curly working 1
moe working 2
moe working 1
larry working 8
larry working 7
larry working 6
larry working 5
larry working 4
larry working 3
larry working 2
larry working 1
Took 8.033247709274292 seconds

Each worker performs about 4 seconds of execution. However, only two can run at once. So, the total execution time will be more than 6 seconds. We see that.

Now, take that code and only change the main() function:

async def main():
    from curio import Semaphore
    from curio.thread import AsyncThread

    s = Semaphore(2)
    t1 = AsyncThread(target=worker, args=('curly', s, 2, 2))
    await t1.start()
    t2 = AsyncThread(target=worker, args=('moe', s, 4, 1))
    await t2.start()
    t3 = AsyncThread(target=worker, args=('larry', s, 8, 0.5))
    await t3.start()
    await t1.join()
    await t2.join()
    await t3.join()

if __name__ == '__main__':
    from curio import run
    run(main)

Make no other changes and run it in Curio. You'll get very similar output. The scheduling will be a bit different, but you'll get something comparable:

curly working 2
moe working 4
larry working 8
moe working 3
larry working 7
curly working 1
larry working 6
moe working 2
larry working 5
moe working 1
larry working 4
larry working 3
larry working 2
larry working 1
Took 6.5362467765808105 seconds

Very good. But, wait a minute? Did you just run some unmodified synchronous thread function (worker()) within Curio? Yes, yes, you did. That function not only performed a blocking operation (time.sleep()), it also used a synchronous context-manager on a Curio Semaphore object just like it did when it used a Semaphore from the threading module. What devious magic is this???

In short, an asynchronous thread is a real-life fully realized thread. A POSIX thread. A thread created with the threading module. Yes, one of THOSE threads your parents warned you about. You can perform blocking operations and everything else you might do in this thread. However, sitting behind this thread is a Curio task. That's the magic part. This hidden task takes over and handles any kind of operation you might perform on synchronization objects that originate from Curio. That Semaphore object you passed in was handled by that task. So, in the worker, there was this code fragment:

with lock:
    print('%s working %d' % (name, n))
    time.sleep(interval)
    n -= 1

The code sitting behind the with lock: part executes in a Curio backing task. The body of the statement runs in the thread.

It gets more wild. You can have both Curio tasks and asynchronous threads sharing synchronization primitives. For example, this code also works fine:

import time
import curio

# A synchronous worker (traditional thread programming)
def worker(name, lock, n, interval):
    while n > 0:
        with lock:
            print('%s working %d' % (name, n))
            time.sleep(interval)
            n -= 1

# An asynchronous worker
async def aworker(name, lock, n, interval):
    while n > 0:
        async with lock:
            print('%s working %d' % (name, n))
            await curio.sleep(interval)
            n -= 1

async def main():
    from curio.thread import AsyncThread
    from curio import Semaphore

    s = Semaphore(2)

    # Launch some async-threads
    t1 = AsyncThread(target=worker, args=('curly', s, 2, 2))
    await t1.start()
    t2 = AsyncThread(target=worker, args=('moe', s, 4, 1))
    await t2.start()

    # Launch a normal curio task
    t3 = await curio.spawn(aworker, 'larry', s, 8, 0.5)

    await t1.join()
    await t2.join()
    await t3.join()

Just to be clear, this code involves asynchronous tasks and threads sharing the same synchronization primitive and all executing concurrently. No problem.

It gets better. You can use await in an asynchronous thread if you use the AWAIT() function. For example, consider this code:

from curio.thread import AWAIT, AsyncThread
import curio

# A synchronous function
def consumer(q):
    while True:
        item = AWAIT(q.get())    # <- !!!!
        if not item:
            break
        print('Got:', item)
    print('Consumer done')

async def producer(n, q):
    while n > 0:
        await q.put(n)
        await curio.sleep(1)
        n -= 1
    await q.put(None)

async def main():
    q = curio.Queue()
    t = AsyncThread(target=consumer, args=(q,))
    await t.start()
    await producer(10, q)
    await t.join()

if __name__ == '__main__':
    curio.run(main)

Good Guido, what madness is this? The code creates a Curio Queue object that is used from both a task and an asynchronous thread. Since queue operations normally require the use of await, it's used in both places. In the producer() coroutine, you use await q.put(n) to put an item on the queue. In the consumer() function, you use AWAIT(q.get()) to get an item. There's a bit of asymmetry there, but consumer() is just a normal synchronous function. You can't use the await keyword in such a function, but Curio provides a function that takes its place. All is well. Maybe.

And on a related note, why is it AWAIT() in all-caps like that? Mostly it's because of all of those coders who continuously and loudly rant about how you should never program with threads. Forget that. Clearly they have never seen async threads before. It's AWAIT! AWAIT! AWAIT! It's shouted so it can be more clearly heard above all of that ranting. To be honest, it's also pretty magical–so maybe it's not such a bad thing for it to jump out of the code at you.


Boo! And there’s the tiny detail of await being a reserved keyword. Let’s continue.

A curious thing about the Curio AWAIT() is that it does nothing if you give it something other than a coroutine. So, you could still use that consumer() function with a normal thread. Just pop into the REPL and try this:

>>> import queue
>>> import threading
>>> q = queue.Queue()
>>> t = threading.Thread(target=consumer, args=(q,))
>>> t.start()
>>> q.put(1)
Got: 1
>>> q.put(2)
Got: 2
>>> q.put(None)
Consumer done
>>>

Just to be clear about what's happening here, consumer() is a normal synchronous function. It uses the AWAIT() function on a queue. We just gave it a normal thread queue and launched it into a normal thread at the interactive prompt. It still works. Curio is not running at all.

Running threads within Curio has some side benefits. If you're willing to abandon the limitations of the threading module, you'll find that Curio's features such as timeouts and cancellation work fine in a thread. For example:

from curio.thread import AWAIT, AsyncThread
import curio

def consumer(q):
    try:
        while True:
            try:
                with curio.timeout_after(0.5):
                    item = AWAIT(q.get())
            except curio.TaskTimeout:
                print('Ho, hum...')
                continue
            print('Got:', item)
            AWAIT(q.task_done())
    except curio.CancelledError:
        print('Consumer done')
        raise

async def producer(n, q):
    while n > 0:
        await q.put(n)
        await curio.sleep(1)
        n -= 1
    print('Producer done')

async def main():
    q = curio.Queue()
    t = AsyncThread(target=consumer, args=(q,))
    await t.start()
    await producer(10, q)
    await q.join()
    await t.cancel()

if __name__ == '__main__':
    curio.run(main)

Here, the t.cancel() cancels the async-thread. As with normal Curio tasks, the cancellation is reported on blocking operations involving AWAIT(). The timeout_after() feature also works fine. You don't use it as an asynchronous context manager in a synchronous function, but it has the same overall effect. Don't try this with a normal thread.

The process of launching an asynchronous thread can be a bit cumbersome. Therefore, there are some convenience features for making it easier. If you're working within Curio, you can use spawn_thread() to launch a thread. It works in the same general manner as the normal spawn function. For example:

t = await spawn_thread(consumer, q)
...
await t.join()

The spawn_thread() function can also be used as an asynchronous context manager:

async with spawn_thread():
    consumer(q)
    ...

If you use this latter form, the entire body of the context manager runs in an asynchronous thread. You are not allowed to use any operation involving async or await. However, you can still use the magic AWAIT() function to delegate async operations back to Curio.

Finally, there is a special decorator @async_thread that can be used to adapt a synchronous function. For example:

import time
from curio.thread import async_thread, AWAIT
from curio import run, tcp_server

@async_thread
def sleeping_dog(client, addr):
    with client:
        for data in client.makefile('rb'):
            n = int(data)
            time.sleep(n)
            AWAIT(client.sendall(b'Bark!\n'))
    print('Connection closed')

run(tcp_server, '', 25000, sleeping_dog)

If you do this, the function becomes a coroutine where any invocation automatically launches it into a thread. This is useful if you need to write coroutines that perform a lot of blocking operations, but you'd like that coroutine to work transparently with the rest of Curio. Not to add further magic, but functions that use @async_thread can still be used in normal synchronous code. If called from a non-coroutine, the function executes normally.

All of this discussion is really not presenting asynchronous threads in their full glory. The key idea though is that instead of thinking of threads as being this completely separate universe of code that exists outside of Curio, you can actually create threads that work with Curio. They can use all of Curio's synchronization primitives and they can interact with Curio tasks. These threads can use all of Curio's normal features and they can perform blocking operations. They can call C extensions that release the GIL. You can have these threads interact with existing libraries. If you're organized, you can write synchronous functions that work with Curio and with normal threaded code at the same time. It's a brave new world.


1.4.6 Programming with Processes

A pitfall of asynchronous I/O is that it does not play nice with CPU-intensive operations. Just as a synchronous blocking operation can stall the kernel, a long-running calculation can do the same. Although calculations can be moved over to threads, that does not work as well as you might expect. Python's global interpreter lock (GIL) prevents more than one thread from executing in parallel. Moreover, CPU intensive operations can starve I/O handling. There's a lot that can be said about this, but go view Dave's talk at https://www.youtube.com/watch?v=5jbG7UKT1l4 and the associated slides at http://www.slideshare.net/dabeaz/in-search-of-the-perfect-global-interpreter-lock. The bottom line: threads are not what you're looking for if CPU-intensive processing is your goal.

Curio provides several mechanisms for working with CPU-intensive work. This section will describe some approaches you might take.

Launching Subprocesses

If CPU intensive work can be neatly packaged up into an independent program or script, you can have curio run it using the curio.subprocess module. This is an asynchronous implementation of the Python standard library module by the same name. You use it the same way:

from curio.subprocess import check_output, CalledProcessError

async def coro():
    try:
        out = await check_output(['prog', 'arg1', 'arg2'])
    except CalledProcessError as e:
        print('Failed!')

This runs an external command, collects its output, and returns it to you as a string. Curio also provides an asynchronous version of the Popen class and the subprocess.run() function. Again, the behavior is meant to mimic that of the standard library module.
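For instance, here is a small sketch (not taken from the reference manual) of how the asynchronous Popen class might be used to feed data to an external command and collect its output. It assumes that curio.subprocess.Popen accepts the same constructor arguments as the standard library version, that its communicate() method accepts input bytes, and that a sort command exists on your system:

import subprocess
from curio import run
from curio.subprocess import Popen

async def sort_bytes(data):
    # Launch the external command with pipes attached to stdin/stdout
    p = Popen(['sort'], stdin=subprocess.PIPE, stdout=subprocess.PIPE)
    out, err = await p.communicate(data)
    return out

if __name__ == '__main__':
    print(run(sort_bytes, b'pear\napple\nbanana\n'))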

Running CPU intensive functions

If you have a simple function that performs CPU-intensive work, you can try running it using the run_in_process() function. For example:

from curio import run_in_process

def fib(n):
    if n <= 2:
        return 1
    else:
        return fib(n-1) + fib(n-2)

async def coro():
    r = await run_in_process(fib, 40)

This runs the specified function in a completely separate Python interpreter and returns the result. It is critical to emphasize that this only works if the supplied function is completely isolated. It should not depend on global state or have any side-effects. Everything the function needs to execute should be passed in as arguments.

The run_in_process() function works with all of Curio's usual features including cancellation and timeouts. If cancelled, the subprocess being used to execute the work is sent a SIGTERM signal with the expectation that it will die immediately.
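For example, here is a small sketch of bounding a CPU-intensive computation with a timeout. timeout_after() and TaskTimeout are standard Curio features; the fib() function repeats the one shown above and the time limits are arbitrary choices for illustration:

from curio import run, run_in_process, timeout_after, TaskTimeout

def fib(n):
    return 1 if n <= 2 else fib(n-1) + fib(n-2)

async def bounded_fib(n, limit):
    try:
        # If the limit expires, the pending run_in_process() operation is
        # cancelled and the worker process is terminated as described above
        return await timeout_after(limit, run_in_process, fib, n)
    except TaskTimeout:
        print('fib(%d) took longer than %s seconds; giving up' % (n, limit))
        return None

if __name__ == '__main__':
    print(run(bounded_fib, 45, 5))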


Message Passing and Channels

One issue with run_in_process() is that it doesn't really give you much control over what's happening in a child process. For example, you don't have too much control over subtle details such as signal handling, files, network connections, cancellation, and other things. Also, if there is any kind of persistent state, it will be difficult to manage.

For more complicated kinds of things, you might want to turn to explicit message passing instead. For this, Curio provides a Channel object. A channel is kind of like a socket except that it allows picklable Python objects to be sent and received. It also provides a bit of authentication. Here is an example of a simple producer program using channels:

# producer.py
from curio import Channel, run

async def producer(ch):
    while True:
        c = await ch.accept(authkey=b'peekaboo')
        for i in range(10):
            await c.send(i)
        await c.send(None)   # Sentinel

if __name__ == '__main__':
    ch = Channel(('localhost', 30000))
    run(producer, ch)

Here is a consumer program:

# consumer.py
from curio import Channel, run

async def consumer(ch):
    c = await ch.connect(authkey=b'peekaboo')
    while True:
        msg = await c.recv()
        if msg is None:
            break
        print('Got:', msg)

if __name__ == '__main__':
    ch = Channel(('localhost', 30000))
    run(consumer, ch)

Each of these programs creates a corresponding Channel object. One of the programs must act as a server and accept incoming connections using Channel.accept(). The other program uses Channel.connect() to make a connection. As an option, an authorization key may be provided. Both methods return a Connection instance that allows Python objects to be sent and received. Any Python object compatible with pickle is allowed.

Beyond this, how you use a channel is largely up to you. Each program runs independently. The programs could live on the same machine. They could run on separate machines. The main thing is that they send messages back and forth.

One notable thing about channels is that they are compatible with Python's multiprocessing module. For example, you could rewrite the consumer.py program like this:

# consumer.py
from multiprocessing.connection import Client

def consumer(address):
    c = Client(address, authkey=b'peekaboo')
    while True:
        msg = c.recv()
        if msg is None:
            break
        print('Got:', msg)

if __name__ == '__main__':
    consumer(('localhost', 30000))

This code doesn't involve Curio in any way. However, it speaks the same messaging protocol. So, it should work just fine.
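The reverse direction works too: a plain multiprocessing program can sit on the server side of the connection. Here is a sketch of producer.py rewritten without Curio using multiprocessing.connection.Listener (the address and authkey match the earlier examples; which end runs Curio is up to you):

# producer.py (no Curio involved)
from multiprocessing.connection import Listener

def producer(address):
    with Listener(address, authkey=b'peekaboo') as listener:
        c = listener.accept()
        for i in range(10):
            c.send(i)
        c.send(None)   # Sentinel

if __name__ == '__main__':
    producer(('localhost', 30000))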

Spawning Tasks in a Subprocess

As a final option, Curio provides a mechanism for spawning tasks in a subprocess. To do this, use the aptly named aside() function. For example:

from curio import Channel, run, aside

async def producer(ch):
    c = await ch.accept(authkey=b'peekaboo')
    for i in range(10):
        await c.send(i)

async def consumer(ch):
    c = await ch.connect(authkey=b'peekaboo')
    while True:
        msg = await c.recv()
        print('Got:', msg)

async def main():
    ch = Channel(('localhost', 30000))
    cons = await aside(consumer, ch)   # Launch consumer in separate process
    await producer(ch)
    await cons.cancel()                # Cancel consumer process

if __name__ == '__main__':
    run(main)

aside() does nothing more than launch a new Python subprocess and invoke curio.run() on the supplied coroutine. Any additional arguments supplied to aside() are given as arguments to the coroutine.

aside() does not involve a pipe or a process fork. The newly created process shares no state with the caller. There is no I/O channel between processes. There is no shared signal handling. If you want I/O, you should create a Channel object and pass it as an argument as shown (or use some other communication mechanism such as sockets).

A notable thing about aside() is that it still creates a proper Task in the caller. You can join with that task or cancel it. It will be cancelled on kernel shutdown if you make it daemonic. If you cancel the task, a TaskCancelled exception is propagated to the subprocess (e.g., the consumer() coroutine above gets a proper cancellation exception when the main() coroutine invokes cons.cancel()).

Tasks launched using aside() do not return a normal result. As noted, aside() does not create a pipe or any kind of I/O channel for communicating a result. If you need a result, it should be communicated via a channel. Should you call join(), the return value is the exit code of the subprocess. Normally it is 0. A non-zero exit code indicates an error of some kind.

aside() can be particularly useful if you want to write programs that perform sharding or other kinds of distributed computing tricks. For example, here is an example of a sharded echo server:


from curio import *
import signal
import os

async def echo_client(sock, address):
    print(os.getpid(), 'Connection from', address)
    async with sock:
        try:
            while True:
                data = await sock.recv(100000)
                if not data:
                    break
                await sock.sendall(data)
        except CancelledError:
            await sock.sendall(b'Server is going away\n')
            raise

async def main(nservers):
    goodbye = SignalEvent(signal.SIGTERM, signal.SIGINT)
    for n in range(nservers):
        await aside(tcp_server, '', 25000, echo_client, reuse_port=True)
    await goodbye.wait()
    print("Goodbye cruel world!")
    raise SystemExit(0)

if __name__ == '__main__':
    run(main(10))

In this code, aside() is used to spin up 10 separate processes, each of which is running the Curio tcp_server() coroutine. The reuse_port option is used to make them all bind to the same port. The main program then waits for a termination signal to arrive, followed by a request to exit. That's it–you now have ten running Python processes in parallel. On exit, every task in every process will be properly cancelled and each connected client will get the "Server is going away" message. It's magic.

Let's step aside for a moment and talk a bit more about some of this magic. When working with subprocesses, it is common to spend a lot of time worrying about things like shutdown, signal handling, and other horrors. Yes, those things are an issue, but if you use aside() to launch tasks, you should just manage those tasks in the usual Curio way. For example, if you want to explicitly cancel one of them, use its cancel() method. Or if you want to quit altogether, raise SystemExit as shown. Internally, Curio is tracking the associated subprocesses and will manage their lifetime appropriately. As long as you let Curio do its thing and you shut things down cleanly, it should all work.

1.4.7 Working with Files

Let's talk about files for a moment. By files, I mean files on the file system–as in the thousands of things sitting in the Desktop folder on your laptop.

Files present a special problem for asynchronous I/O. Yes, you can use Python's built-in open() function to open a file and yes you can obtain a low-level integer file descriptor for it. You might even be able to wrap it with a Curio FileStream() instance. However, under the hood, it's hard to say if it is going to operate in an async-friendly manner. Support for asynchronous file I/O has always been a bit dicey in most operating systems. Often it is nonexistent unless you resort to very specialized APIs such as the POSIX aio_* functions. And even then, it might not exist.

The bottom line is that interacting with traditional files might cause Curio to block, leading to various performance problems under heavy load. As an example, consider everything that has to happen simply to open a file–for example, traversing through a directory hierarchy, loading file metadata, etc. It could involve disk seeks on a physical device. It might involve network access. It will undoubtedly introduce a delay in your async code.

If you're going to write code that operates with traditional files, prefer to use Curio's aopen() function instead. For example:

async def coro():
    async with aopen('somefile.txt') as f:
        data = await f.read()     # Get data
        ...

aopen() returns a file-like object where all of the traditional file methods have been replaced by async-compatible equivalents. The underlying implementation is guaranteed not to block the Curio kernel loop. How this is accomplished may vary by operating system. At the moment, Curio uses background threads to avoid blocking.
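As a further illustration, here is a small sketch that copies one file to another without blocking the kernel. The file names are hypothetical, and the sketch assumes the object returned by aopen() provides a write() method mirroring the ordinary file API:

from curio import run
from curio.file import aopen

async def copy_file(src, dst, chunksize=65536):
    async with aopen(src, 'rb') as fin:
        async with aopen(dst, 'wb') as fout:
            while True:
                chunk = await fin.read(chunksize)
                if not chunk:
                    break
                await fout.write(chunk)

if __name__ == '__main__':
    run(copy_file, 'input.dat', 'output.dat')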

1.4.8 Interacting with Synchronous Code

Asynchronous functions can call functions written in a synchronous manner. For example, calling out to standard library modules. However, this communication is one-way. That is, an asynchronous function can call a synchronous function, but the reverse is not true. For example, this fails:

async def spam():
    print('Asynchronous spam')

def yow():
    print('Synchronous yow')
    spam()         # Fails (doesn't run)
    await spam()   # Fails (syntax error)
    run(spam)      # Fails (RuntimeError, only one kernel per thread)

async def main():
    yow()          # Works

run(main)

The reason that it doesn't work is that asynchronous functions require the use of the Curio kernel and once you call a synchronous function, it's no longer in control of what's happening.

It's probably best to think of synchronous code as a whole different universe. If for some reason, you need to make synchronous code communicate with asynchronous code, you need to devise some sort of different strategy for dealing with it. Curio provides a few different techniques that synchronous code can use to interact with asynchronous code from beyond the abyss.

Synchronous/Asynchronous Queuing

One approach for bridging asynchronous and synchronous code is to use a Queue and to take an approach similar to how you might communicate between threads. For example, you can write code like this:

from curio import run, spawn, sleep, Queue

q = Queue()

async def worker():
    item = await q.get()
    print('Got:', item)

def yow():
    print('Synchronous yow')
    q.put('yow')        # Works (note: there is no await)
    print('Goodbye yow')

async def main():
    await spawn(worker)
    yow()
    await sleep(1)      # <- worker awakened here
    print('Main goodbye')

run(main)

Running this code produces the following output:

Synchronous yow
Goodbye yow
Got: yow
Main goodbye

Curio queues allow the q.put() method to be used from synchronous code. Thus, if you're in the synchronous realm, you can at least queue up a bunch of data. It won't be processed until you return to the world of Curio tasks though. So, in the above code, you won't see the item actually delivered until some kind of blocking operation is made.
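Queues can also carry data in the other direction. As a sketch, the UniversalQueue used later in this document is designed to be shared between threads and tasks: synchronous code calls get() and put() directly while asynchronous code awaits them. The names and timing below are made up for illustration:

import threading
from curio import run, spawn, sleep, UniversalQueue

results = UniversalQueue()

def sync_consumer():
    while True:
        item = results.get()        # ordinary blocking get (no await)
        if item is None:
            break
        print('Consumed:', item)

async def async_producer():
    for n in range(3):
        await results.put(n)        # awaited put on the async side
        await sleep(0.1)
    await results.put(None)         # Sentinel to stop the consumer

async def main():
    t = threading.Thread(target=sync_consumer)
    t.start()
    prod = await spawn(async_producer)
    await prod.join()

run(main)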

Lazy Coroutine Evaluation

Another approach is to exploit the "lazy" nature of coroutines. Coroutines don't actually execute until they are awaited. Thus, synchronous functions could potentially defer asynchronous operations until execution returns back to the world of async. For example, you could do this:

from curio import run

async def spam():
    print('Asynchronous spam')

def yow(deferred):
    print('Synchronous yow')
    deferred.append(spam())   # Creates a coroutine, but doesn't execute it
    print('Goodbye yow')

async def main():
    deferred = []
    yow(deferred)
    for coro in deferred:
        await coro            # spam() runs here

run(main)

If you run the above code, the output will look like this:

Synchronous yow
Goodbye yow
Asynchronous spam

Notice how the asynchronous operation has been deferred until control returns back to the main() coroutine and the coroutine is properly awaited.


Executing Coroutines on Behalf of Synchronous Code

If you are in synchronous code and need to execute a coroutine, you can certainly use the Curio run() function to do it. However, if you're going to execute a large number of coroutines, you'll be better served by creating a Kernel object and repeatedly using its run() method instead:

from curio import Kernel

async def coro(n):
    print('Hello coro', n)

def main():
    with Kernel() as kern:
        for n in range(10):
            kern.run(coro, n)

main()

Kernels involve a fair bit of internal state related to their operation. Taking this approach will be a lot more efficient. Also, if you happen to launch any background tasks, those tasks will persist and continue to execute when you make subsequent run() calls.
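Here is a rough sketch of that behavior. The ticker() task, its daemon=True flag, and the timing values are illustrative assumptions rather than part of the example above:

from curio import Kernel, spawn, sleep

async def ticker():
    while True:
        print('tick')
        await sleep(0.25)

async def start_background():
    # Daemonic so it is cleaned up when the kernel shuts down
    await spawn(ticker, daemon=True)

async def job(n):
    await sleep(0.6)
    print('job', n, 'finished')

def main():
    with Kernel() as kern:
        kern.run(start_background)
        for n in range(3):
            kern.run(job, n)   # ticker() keeps running during each of these calls

main()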

Interfacing with Foreign Event Loops

Sometimes, you may want to interface Curio-based applications with a foreign event loop. This is a scenario that often arises when using a graphical user interface.

For this scenario, there are a few possible options. One choice is to try and run everything in a single thread. For this, you might need to inject callouts to run the foreign event loop on a periodic timer. This is a somewhat involved example, but here is some code that integrates a Curio echo server with a tiny Tk-based GUI:

import tkinter as tk
from curio import *

class EchoApp(object):
    def __init__(self):
        # Pending coroutines
        self.pending = []

        # Main Tk window
        self.root = tk.Tk()

        # Number of clients connected label
        self.clients_label = tk.Label(text='')
        self.clients_label.pack()
        self.nclients = 0
        self.incr_clients(0)
        self.client_tasks = set()

        # Number of bytes received label
        self.bytes_received = 0
        self.bytes_label = tk.Label(text='')
        self.bytes_label.pack()
        self.update_bytes()

        # Disconnect all button
        self.disconnect_button = tk.Button(text='Disconnect all',
                          command=lambda: self.pending.append(self.disconnect_all()))
        self.disconnect_button.pack()

    def incr_clients(self, delta=1):
        self.nclients += delta
        self.clients_label.configure(text='Number Clients %d' % self.nclients)

    def update_bytes(self):
        self.bytes_label.configure(text='Bytes received %d' % self.bytes_received)
        self.root.after(1000, self.update_bytes)

    async def echo_client(self, sock, address):
        self.incr_clients(1)
        self.client_tasks.add(await current_task())
        try:
            async with sock:
                while True:
                    data = await sock.recv(100000)
                    if not data:
                        break
                    self.bytes_received += len(data)
                    await sock.sendall(data)
        finally:
            self.incr_clients(-1)
            self.client_tasks.remove(await current_task())

    async def disconnect_all(self):
        for task in list(self.client_tasks):
            await task.cancel()

    async def main(self):
        serv = await spawn(tcp_server, '', 25000, self.echo_client)
        while True:
            self.root.update()
            for coro in self.pending:
                await coro
            self.pending = []
            await sleep(0.05)

if __name__ == '__main__':
    app = EchoApp()
    run(app.main)

If you run this program, it will put up a small GUI window.

The GUI has two labels. One of the labels shows the number of connected clients. It is updated by the incr_clients() method. The other label shows a byte total. It is updated on a periodic timer. The button Disconnect All disconnects all of the connected clients by cancelling them.

Now, a few tricky aspects of this code. First, control is managed by the main() coroutine which runs under Curio. The various server tasks are spawned separately and the program enters a periodic update loop in which the GUI is updated on a timer every 50 milliseconds. Since everything runs in the same thread, it's okay for coroutines to perform operations that might update the display (e.g., directly calling incr_clients()).

Executing coroutines is a bit tricky though. Since the GUI runs outside of Curio, it's not able to run coroutines directly. Thus, if it's going to invoke a coroutine in response to an event such as a button press, it has to handle it a bit differently. In this case, the coroutine is placed onto a list (self.pending) and processed in the main() loop after pending GUI events have been updated. It looks a bit weird, but it basically works.

One issue with this approach is that it might result in a sluggish or glitchy GUI. Yes, events are processed on a periodic interval, but if there's a lot of action going on in the GUI, it might just feel "off" in some way. Dealing with that in a single thread is going to be tricky. You could invert the control flow by having the GUI call out to Curio on a periodic timer. However, that's just going to change the problem into one of glitchy network performance.

Another possibility is to run the GUI and Curio in separate threads and to have them communicate via queues. Here's some code that does that:

import tkinter as tk
from curio import *
import threading

class EchoApp(object):
    def __init__(self):
        self.gui_ops = UniversalQueue(withfd=True)
        self.coro_ops = UniversalQueue()

        # Main Tk window
        self.root = tk.Tk()

        # Number of clients connected label
        self.clients_label = tk.Label(text='')
        self.clients_label.pack()
        self.nclients = 0
        self.incr_clients(0)
        self.client_tasks = set()

        # Number of bytes received label
        self.bytes_received = 0
        self.bytes_label = tk.Label(text='')
        self.bytes_label.pack()
        self.update_bytes()

        # Disconnect all button
        self.disconnect_button = tk.Button(text='Disconnect all',
                          command=lambda: self.coro_ops.put(self.disconnect_all()))
        self.disconnect_button.pack()

        # Set up event handler for queued GUI updates
        self.root.createfilehandler(self.gui_ops, tk.READABLE, self.process_gui_ops)

    def incr_clients(self, delta=1):
        self.nclients += delta
        self.clients_label.configure(text='Number Clients %d' % self.nclients)

    def update_bytes(self):
        self.bytes_label.configure(text='Bytes received %d' % self.bytes_received)
        self.root.after(1000, self.update_bytes)

    def process_gui_ops(self, file, mask):
        while not self.gui_ops.empty():
            func, args = self.gui_ops.get()
            func(*args)

    async def echo_client(self, sock, address):
        await self.gui_ops.put((self.incr_clients, (1,)))
        self.client_tasks.add(await current_task())
        try:
            async with sock:
                while True:
                    data = await sock.recv(100000)
                    if not data:
                        break
                    self.bytes_received += len(data)
                    await sock.sendall(data)
        finally:
            self.client_tasks.remove(await current_task())
            await self.gui_ops.put((self.incr_clients, (-1,)))

    async def disconnect_all(self):
        for task in list(self.client_tasks):
            await task.cancel()

    async def main(self):
        serv = await spawn(tcp_server, '', 25000, self.echo_client)
        while True:
            coro = await self.coro_ops.get()
            await coro

    def run_forever(self):
        threading.Thread(target=run, args=(self.main(),)).start()
        self.root.mainloop()

if __name__ == '__main__':
    app = EchoApp()
    app.run_forever()

In this code, there are two queues in use. The gui_ops queue is used to carry out updates on the GUI from Curio. The echo_client() coroutine puts various operations on this queue. The GUI listens to the queue by watching for I/O events. This is a bit sneaky, but Curio's UniversalQueue has an option that delivers a byte on an I/O channel whenever an item is added to the queue. This, in turn, can be used to wake an external event loop. In this code, the createfilehandler() call at the end of the __init__() sets this up.

The coro_ops queue goes in the other direction. Whenever the GUI wants to execute a coroutine, it places it on this queue. The main() coroutine receives and awaits these coroutines.

1.4.9 Programming Considerations and APIs

The use of async and await presents new challenges in designing libraries and APIs. For example, asynchronous functions can't be called outside of coroutines and weird things happen if you forget to use await. Curio can't solve all of these problems, but it does provide some metaprogramming features that might prove to be interesting. Many of these features are probably best described as "experimental" so use them with a certain skepticism and caution.


A Moment of Zen

One of the most commonly cited rules of Python coding is that "explicit is better than implicit." Use of async and await embodies this idea–if you're using a coroutine, it is always called using await. There is no ambiguity when reading the code. Moreover, await is only allowed inside functions defined using async def. So, if you see async or await, you're working with coroutines–end of story.

That said, there are still certain design challenges. For example, where are you actually allowed to define coroutines? Functions? Methods? Special methods? Properties? Also, what happens when you start to mix normal functions and coroutines together? For example, suppose you have a class with a mix of methods like this:

class Spam(object):
    async def foo(self):
        ...
    def bar(self):
        ...

Is this mix of a coroutine and non-coroutine methods in the class a potential source of confusion to users? It might be hard to say. However, what happens if more advanced features such as inheritance enter the picture and people screw it up? For example:

class Child(Spam):
    def foo(self):        # Was a coroutine in Spam
        ...

Needless to say, this is the kind of thing that might keep you up at night. If you are writing any kind of large application involving async and await you'll probably want to spend some time carefully thinking about the big picture and how all of the parts hold together.

Asynchronous Abstract Base Classes

Suppose you wanted to enforce async-correctness in methods defined in a subclass. Use AsyncABC as a base class. For example:

from curio.meta import AsyncABC

class Base(AsyncABC):
    async def spam(self):
        pass

If you inherit from Base and don’t define spam() as an asynchronous method, you’ll get an error:

class Child(Base):
    def spam(self):
        pass

Traceback (most recent call last):
  ...
TypeError: Must use async def spam(self)

The AsyncABC class is also a proper abstract base class so you can use the usual @abstractmethod decorator on methods as well. For example:

from curio.meta import AsyncABC, abstractmethod

class Base(AsyncABC):
    @abstractmethod
    async def spam(self):
        pass

Asynchronous Instance Creation

Normally, use of async and await is forbidden in the __init__() method of a class. Honestly, you should probably try to avoid asynchronous operations during instance creation, but if you can't, there are two approaches. First, you can define an asynchronous class method:

class Spam(object):
    @classmethod
    async def new(cls):
        self = cls.__new__(cls)
        self.val = await coro()
        ...
        return self

# Example of creating an instance
async def main():
    s = await Spam.new()

You'd need to custom-tailor the arguments to new() to your liking. However, as an async function, you're free to use coroutines inside.

A second approach is to inherit from the Curio AsyncObject base class like this:

from curio.meta import AsyncObject

class Spam(AsyncObject):
    async def __init__(self):
        self.val = await coro()
        ...

# Example of creating an instance
async def main():
    s = await Spam()

This latter approach probably looks the most "pythonic" at the risk of shattering your co-workers' heads as they wonder what kind of voodoo-magic you applied to the Spam class to make it support an asynchronous __init__() method. If you must know, that magic involves metaclasses. On that subject, the AsyncObject base uses the same metaclass as AsyncABC, enforces async-correctness in subclasses, and allows abstract methods to be defined.

Asynchronous Instance Cleanup/Deletion

You might be asking yourself if it's possible to put asynchronous operations in the __del__() method of a class. In short: it's not possible (at least not using any technique that I'm aware of). If you need to perform actions involving asynchronous operations on cleanup, you should make your class operate as an asynchronous context manager:

class Spam(object):
    async def __aenter__(self):
        await coro()
        ...

    async def __aexit__(self, ty, val, tb):
        await coro()
        ...

Then, use your object using an async with statement like this:

async def main():
    s = Spam()
    ...
    async with s:
        ...

If a context-manager is not appropriate, then your only other option is to have an explicit shutdown/cleanup method defined as an async function:

class Spam(object):
    async def cleanup(self):
        await coro()
        ...

async def main():
    s = Spam()
    try:
        ...
    finally:
        await s.cleanup()

Asynchronous Properties

It might come as a surprise, but normal Python properties can be defined using asynchronous functions. For example:

class Spam(object):
    @property
    async def value(self):
        result = await coro()
        return result

# Example usage
async def main():
    s = Spam()
    ...
    v = await s.value

The property works as a read-only value as long as you preface any access by an await. Again, you might shatter heads pulling a stunt like this.

It does not seem possible to define asynchronous property setter or deleter functions. So, if you're going to drop async on a property, keep in mind that it really needs to be read-only.
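One possible workaround, sketched below, is to pair a read-only async property with an ordinary async method that performs updates. The set_value() name and the curio.sleep(0) calls standing in for real asynchronous work are made up for illustration:

import curio

class Spam(object):
    def __init__(self):
        self._value = 0

    @property
    async def value(self):
        await curio.sleep(0)        # stand-in for a real asynchronous fetch
        return self._value

    async def set_value(self, v):   # explicit async "setter" method
        await curio.sleep(0)        # stand-in for a real asynchronous store
        self._value = v

async def main():
    s = Spam()
    await s.set_value(42)
    print(await s.value)

curio.run(main)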

Blocking Functions

Suppose you have a normal Python function that performs blocking operations, but you'd like the function to be safely available to coroutines. You can use the Curio @blocking decorator to do this:

from curio.meta import blocking

@blocking
def spam():
    ...
    blocking_op()
    ...

The interesting thing about @blocking is that it doesn't change the usage of the function for normal Python code. You call it the same way you always have:

def foo():
    s = spam()

In asynchronous code, you call the same function but add await like this:

async def bar():
    s = await spam()

Behind the scenes, the blocking function is implicitly executed in a separate thread using Curio's run_in_thread() function.
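For a concrete picture, here is a small self-contained sketch; the slow_add() function and the one-second sleep are made up for illustration:

import time
from curio import run
from curio.meta import blocking

@blocking
def slow_add(x, y):
    time.sleep(1)        # blocks a worker thread, not the Curio kernel
    return x + y

async def main():
    print(await slow_add(2, 3))   # other tasks keep running during the sleep

run(main)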

CPU-Intensive Functions

CPU-intensive operations performed by a coroutine will temporarily suspend execution of all other tasks. If you have such a function, you can mark it as such using the @cpubound decorator. For example:

from curio.meta import cpubound

@cpubound
def spam():
    # Computationally expensive op
    ...
    return result

In normal Python code, you call this function the same way as before:

def foo():
    s = spam()

In asynchronous code, you call the same function but add await like this:

async def bar():
    s = await spam()

This will run the computationally intensive task in a separate process using Curio’s run_in_process() function.

Be aware that @cpubound makes a function execute in a separate Python interpreter process. It's only going to work correctly if that function is free of side-effects and dependencies on global state.

Dual Synchronous/Asynchronous Function Implementation

Suppose you wanted to have a function with both a synchronous and asynchronous implementation. You can use the @awaitable decorator for this:

from curio.meta import awaitable

def spam():
    print('Synchronous spam')

@awaitable(spam)
async def spam():
    print('Asynchronous spam')

The selection of the appropriate method now depends on execution context. Here's an example of what happens in your code:

def foo():
    spam()          # --> Synchronous spam

async def bar():
    await spam()    # --> Asynchronous spam

If you're wondering how in the world this actually works, let's just say it involves frame hacks. Your list of enemies and the difficulty of your next code review continues to grow.

Considerations for Function Wrapping and Inheritance

Suppose that you have a simple async function like this:

async def spam():
    print('spam')

Now, suppose you have another function that wraps around it:

async def bar():
    print('bar')
    return await spam()

If you call bar() as a coroutine, it will work perfectly fine. For example:

async def main():
    await bar()

However, here's a subtle oddity. It turns out that you could drop the async and await from the bar() function entirely and everything will still work. For example:

def bar():
    print('bar')
    return spam()

However, should you actually do this? All things considered, I think it's probably better to leave the async and await keywords in place. It makes it more clear to the reader that the code exists in the world of asynchronous programming. This is something to think about as you write larger applications–if you're using async, always define async functions.

Here is another odd example involving inheritance. Suppose you redefined a method and used super() like this:

class Parent(object):
    async def spam(self):
        print('Parent.spam')

class Child(Parent):
    def spam(self):
        print('Child.spam')
        return super().spam()

It turns out that the spam() method of Child will work perfectly fine, but it's just a little weird that it doesn't use async in the same way as the parent. It would probably read better written like this:

class Child(Parent):
    async def spam(self):
        print('Child.spam')
        return await super().spam()


CHAPTER 2

Installation:

Curio requires Python 3.6 and Unix. You can install it using pip:

bash % python3 -m pip install curio

For best results, however, you’ll want to grab the version on Github at https://github.com/dabeaz/curio.
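For example, pip can install straight from that repository (one possible invocation):

bash % python3 -m pip install git+https://github.com/dabeaz/curio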


CHAPTER 3

An Example

Here is a simple TCP echo server implemented using sockets and curio:

# echoserv.py

from curio import run, spawn
from curio.socket import *

async def echo_server(address):
    sock = socket(AF_INET, SOCK_STREAM)
    sock.setsockopt(SOL_SOCKET, SO_REUSEADDR, 1)
    sock.bind(address)
    sock.listen(5)
    print('Server listening at', address)
    async with sock:
        while True:
            client, addr = await sock.accept()
            await spawn(echo_client, client, addr)

async def echo_client(client, addr):
    print('Connection from', addr)
    async with client:
        while True:
            data = await client.recv(100000)
            if not data:
                break
            await client.sendall(data)
    print('Connection closed')

if __name__ == '__main__':
    run(echo_server, ('', 25000))

If you have programmed with threads, you'll find that curio looks similar. You'll also find that the above server can handle thousands of simultaneous client connections even though no threads are being used under the covers.

Of course, if you prefer something a little higher level, you can have curio take care of the fiddly bits related to setting up the server portion of the code:

# echoserv.py

from curio import run, tcp_server

async def echo_client(client, addr):
    print('Connection from', addr)
    while True:
        data = await client.recv(100000)
        if not data:
            break
        await client.sendall(data)
    print('Connection closed')

if __name__ == '__main__':
    run(tcp_server, '', 25000, echo_client)

This is only a small sample of what's possible. The tutorial is a good starting point for more information. The howto has specific recipes for solving different kinds of problems.


CHAPTER 4

Additional Features

Curio provides additional support for SSL connections, synchronization primitives (events, locks, semaphores, and condition variables), queues, Unix signals, subprocesses, as well as running tasks in threads and processes. The task model fully supports cancellation, timeouts, monitoring, and other features critical to writing reliable code.


CHAPTER 5

The Big Question: Why?

Python already has a variety of libraries for async and event driven I/O. So, why create yet another library? There is no simple answer to that question, but here are a few of the motivations for creating curio.

• Python 3 has evolved considerably as a programming language and has adopted many new language features that are well-suited to cleanly writing a library like this. For example, improved support for non-blocking I/O, support for delegation to subgenerators (yield from) and the introduction of explicit async and await syntax in Python 3.5. Curio takes full advantage of these features and is not encumbered by issues of backwards compatibility with legacy Python code written 15 years ago.

• Existing I/O libraries are mainly built on event-loops, callback functions, and abstractions that predate Python's proper support for coroutines. As a result, they are either overly complicated or dependent on esoteric magic involving C extensions, monkeypatching, or reimplementing half of the TCP flow-control protocol. Curio is a ground-up implementation that takes a different approach to the problem while relying upon known programming techniques involving sockets and files. If you have previously written synchronous code using processes or threads, curio will feel familiar. That is by design.

• Simplicity is an important part of writing reliable systems software. When your code fails, it helps to be able to debug it–possibly down to the level of individual calls to the operating system if necessary. Simplicity matters a lot. Simple code also tends to run faster. The implementation of Curio aims to be simple. The API for using Curio aims to be intuitive.

• It’s fun.


CHAPTER 6

Under the Covers

Internally, curio is implemented entirely as a task queuing system–much in the same model of a microkernel based operating system. Tasks are represented by coroutine functions declared with the async keyword. Each yield of a coroutine results in a low-level kernel "trap" or system call. The kernel handles each trap by moving the current task to an appropriate waiting queue. Events (i.e., due to I/O) and other operations make the tasks move from waiting queues back into service.

It's important to emphasize that the underlying kernel is solely focused on task queuing and scheduling. In fact, the kernel doesn't even perform any I/O operations or do much of anything. This means that it is very small and fast.

Higher-level I/O operations are carried out by a wrapper layer that uses Python's normal socket and file objects. You use the same operations that you would normally use in synchronous code except that you add await keywords to methods that might block.


CHAPTER 7

Questions and Answers

Q: Is curio implemented using the asyncio module?

A: No. Curio is a standalone library. Although the core of the library uses the same basic machinery as asyncio to poll for I/O events, the handling of those events is carried out in a completely different manner.

Q: Is curio meant to be a clone of asyncio?

A: No. Although curio provides a significant amount of overlapping functionality, the API is different and smaller. Compatibility with other libraries is not a goal.

Q: Is there any kind of overarching design philosophy?

A: Yes and no. The "big picture" design of Curio is mainly inspired by the kernel/user space distinction found in operating systems, only it's more of a separation into "synchronous" and "asynchronous" runtime environments. Beyond that, Curio tends to take a rather pragmatic view towards concurrent programming techniques more generally. It's probably best to view Curio as providing a base set of primitives upon which you can build all sorts of interesting things. However, it's not going to dictate much in the way of religious rules on how you structure it.

Q: How many tasks can be created?

A: Each task involves an instance of a Task class that encapsulates a generator. No threads are used. As such, you're really only limited by the memory of your machine–potentially you could have hundreds of thousands of tasks. The I/O functionality in curio is implemented using the built-in selectors module. Thus, the number of open sockets allowed would be subject to the limits of that library combined with any per-user limits imposed by the operating system.

Q: Can curio interoperate with other event loops?

A: It depends on what you mean by the word "interoperate." Curio's preferred mechanism of communication with the external world is a queue. It is possible to communicate between Curio, threads, and other event loops using queues. Curio can also submit work to the asyncio event loop with the provision that it must be running separately in a different thread.

Q: How fast is curio?

A: In rough benchmarking of the simple echo server shown here, Curio runs about 90% faster than comparable code using coroutines in asyncio and about 50% faster than similar code written using Trio. This was last measured on Linux using Python 3.7b3. Keep in mind there is a lot more to overall application performance than the performance of a simple echo server so your mileage might vary. See the examples/benchmark directory for various testing programs.

Q: Is curio going to evolve into a framework?

A: No, because evolving into a framework would mean modifying Curio to actually do something. If it actually did something, then people would start using it to do things. And then all of those things would have to be documented, tested, and supported. People would start complaining about how all the things related to the various built-in things should have new things added to do some crazy thing. No, forget that, Curio remains committed to not doing much of anything the best it can. This includes not implementing HTTP.

Q: What are future plans?

A: Future work on curio will primarily focus on features related to performance, debugging, diagnostics, and reliability. A main goal is to provide a robust environment for running and controlling concurrent tasks. However, it's also supposed to be fun. A lot of time is being spent thinking about the API and how to make it pleasant.

Q: Is there a Curio sticker?

A: No. However, you can make a stencil.

Q: How big is curio?

A: The complete library currently consists of about 3200 statements as reported in coverage tests.

Q: I see various warnings about not using Curio. What should I do?

A: Has programming taught you nothing? Warnings are meant to be ignored. Of course you should use Curio. However, be aware that the main reason you shouldn't be using Curio is that you should be using it.

Q: Can I contribute?

A: Absolutely. Please use the Github page at https://github.com/dabeaz/curio as the primary point of discussion concerning pull requests, bugs, and feature requests.


CHAPTER 8

About

Curio was created by David Beazley (@dabeaz). http://www.dabeaz.com

It is a young project. All contributions welcome.


Python Module Index

curio, curio.activation, curio.channel, curio.file, curio.io, curio.meta, curio.socket, curio.ssl, curio.subprocess, curio.traps, curio.workers

Page 145: curio Documentation - Read the Docs Documentation, Release 0.8 •a small and unusual object that is considered interesting or attractive •A Python library for concurrent I/O and

Index

Symbols_cancel_task() (in module curio.traps), 64_future_wait() (in module curio.traps), 64_get_current() (in module curio.traps), 64_get_kernel() (in module curio.traps), 64_read_wait() (in module curio.traps), 64_scheduler_wait() (in module curio.traps), 64_scheduler_wake() (in module curio.traps), 64_set_timeout() (in module curio.traps), 64_unset_timeout() (in module curio.traps), 64_write_wait() (in module curio.traps), 64

Aabide() (built-in function), 54accept() (curio.channel.Channel method), 44accept() (curio.io.Socket method), 40acquire() (Condition method), 50acquire() (Lock method), 49acquire() (RLock method), 49acquire() (Semaphore method), 50activate() (in module curio.activation), 60Activation (class in curio.activation), 60add_task() (TaskGroup method), 34aopen() (in module curio.file), 47as_stream() (curio.io.Socket method), 40async_thread() (built-in function), 57AsyncABC (class in curio.meta), 61AsyncFile (class in curio.file), 47AsyncObject (class in curio.meta), 62AsyncThread (built-in class), 56authenticate_client() (in module curio.channel), 45authenticate_server() (in module curio.channel), 45AWAIT() (built-in function), 56awaitable() (in module curio.meta), 63

Bbind() (curio.channel.Channel method), 44block_in_thread() (in module curio.workers), 39blocking() (curio.io.FileStream method), 41

blocking() (curio.io.Socket method), 40blocking() (curio.io.SocketStream method), 42blocking() (in module curio.meta), 62bound (BoundedSemaphore attribute), 50BoundedSemaphore (built-in class), 50

C
cancel(), 56
cancel() (Task method), 33
cancel_remaining() (TaskGroup method), 34
cancelled (Task attribute), 33
CancelledError, 63
Channel (class in curio.channel), 44
check_cancellation() (built-in function), 38
check_output() (in module curio.subprocess), 46
clear() (Event method), 48
clock() (built-in function), 36
close() (curio.channel.Channel method), 44
close() (curio.file.AsyncFile method), 47
close() (curio.io.FileStream method), 41
close() (curio.io.Socket method), 40
close() (in module curio.channel), 45
communicate() (curio.subprocess.Popen method), 46
completed (TaskGroup attribute), 34
Condition (built-in class), 50
connect() (curio.channel.Channel method), 44
connect() (curio.io.Socket method), 40
connect_ex() (curio.io.Socket method), 40
Connection (class in curio.channel), 44
coro (Task attribute), 33
cpubound() (in module curio.meta), 62
create_connection() (in module curio.socket), 42
create_default_context() (in module curio.ssl), 43
created() (in module curio.activation), 61
curio (module), 63
curio.activation (module), 60
curio.channel (module), 44
curio.file (module), 46
curio.io (module), 39
curio.meta (module), 61
curio.socket (module), 42
curio.ssl (module), 42
curio.subprocess (module), 46
curio.traps (module), 64
curio.workers (module), 38
CurioError, 63
current_task() (built-in function), 32
cycles (Task attribute), 33

D
daemon (Task attribute), 33
disable_cancellation() (built-in function), 37
do_handshake() (in module curio.io), 40

E
empty() (Queue method), 51
enable_cancellation() (built-in function), 37
Event (built-in class), 48
exception (Task attribute), 33

F
FileStream (class in curio.io), 41
flush() (curio.file.AsyncFile method), 47
flush() (curio.io.FileStream method), 41
fromfd() (in module curio.socket), 42
full() (Queue method), 51

G
get() (Queue method), 51
get_server_certificate() (in module curio.ssl), 43
getaddrinfo() (in module curio.socket), 42
getfqdn() (in module curio.socket), 42
gethostbyaddr() (in module curio.socket), 42
gethostbyname() (in module curio.socket), 42
gethostbyname_ex() (in module curio.socket), 42
gethostname() (in module curio.socket), 42
getnameinfo() (in module curio.socket), 42

I
id (Task attribute), 33
ignore_after() (built-in function), 36
ignore_at() (built-in function), 37
is_set() (Event method), 48

J
join(), 56
join() (Queue method), 51
join() (Task method), 33
join() (TaskGroup method), 34

L
LifoQueue (built-in class), 52
Lock (built-in class), 48
locked() (Condition method), 50
locked() (Lock method), 49
locked() (RLock method), 49
locked() (Semaphore method), 50

M
makefile() (curio.io.Socket method), 40
MAX_WORKER_PROCESSES (in module curio.workers), 39
MAX_WORKER_THREADS (in module curio.workers), 39

N
next_done() (TaskGroup method), 34
next_result() (TaskGroup method), 34
notify(), 51
notify_all(), 51

O
open_connection() (in module curio), 43
open_unix_connection() (in module curio), 43

P
Popen (class in curio.subprocess), 46
PriorityQueue (built-in class), 52
put() (Queue method), 51

Q
qsize() (Queue method), 51
Queue (built-in class), 51

R
read() (curio.file.AsyncFile method), 47
read() (curio.io.FileStream method), 41
read1() (curio.file.AsyncFile method), 47
readall() (curio.io.FileStream method), 41
readinto() (curio.file.AsyncFile method), 47
readinto1() (curio.file.AsyncFile method), 47
readline() (curio.file.AsyncFile method), 47
readline() (curio.io.FileStream method), 41
readlines() (curio.file.AsyncFile method), 47
readlines() (curio.io.FileStream method), 41
recv() (curio.io.Socket method), 39
recv() (in module curio.channel), 45
recv_bytes() (in module curio.channel), 45
recv_into() (curio.io.Socket method), 39
recvfrom() (curio.io.Socket method), 40
recvfrom_into() (curio.io.Socket method), 40
recvmsg() (curio.io.Socket method), 40
recvmsg_into() (curio.io.Socket method), 40
release() (Condition method), 50
release() (Lock method), 49
release() (RLock method), 49
release() (Semaphore method), 50
result (AsyncThread attribute), 56
result (Task attribute), 33
RLock (built-in class), 49
run() (in module curio.subprocess), 46
run() (Kernel method), 32
run_in_executor() (in module curio.workers), 39
run_in_process() (in module curio.workers), 38
run_in_thread() (in module curio.workers), 38
run_server() (in module curio), 44
running() (in module curio.activation), 61

S
seek() (curio.file.AsyncFile method), 47
Semaphore (built-in class), 49
send() (curio.io.Socket method), 40
send() (in module curio.channel), 45
send_bytes() (in module curio.channel), 45
sendall() (curio.io.Socket method), 40
sendmsg() (curio.io.Socket method), 40
sendto() (curio.io.Socket method), 40
set() (Event method), 48
SignalEvent (built-in class), 60
SignalQueue (built-in class), 60
sleep() (built-in function), 36
Socket (class in curio.io), 39
socket() (in module curio.socket), 42
socketpair() (in module curio.socket), 42
SocketStream (class in curio.io), 42
spawn() (built-in function), 32
spawn() (TaskGroup method), 34
spawn_thread() (built-in function), 56
SSLContext (class in curio.ssl), 43
start() (AsyncThread method), 56
state (Task attribute), 33
suspended() (in module curio.activation), 61

T
Task (built-in class), 32
task_done() (Queue method), 52
TaskCancelled, 64
TaskError, 64
TaskGroup (built-in class), 34
TaskTimeout, 64
tcp_server() (in module curio), 43
tcp_server_socket() (in module curio), 44
tell() (curio.file.AsyncFile method), 47
terminated (Task attribute), 33
terminated() (in module curio.activation), 61
timeout_after() (built-in function), 36
timeout_at() (built-in function), 36
TimeoutCancellationError, 64
traceback() (Task method), 33
truncate() (curio.file.AsyncFile method), 47

U
unix_server() (in module curio), 44
unix_server_socket() (in module curio), 44

V
value (Semaphore attribute), 50

W
wait(), 56
wait() (Condition method), 50
wait() (curio.subprocess.Popen method), 46
wait() (Event method), 48
wait() (Task method), 33
wait_for() (Condition method), 50
wake_at() (built-in function), 36
where() (Task method), 33
wrap_socket() (in module curio.ssl), 43
write() (curio.file.AsyncFile method), 47
write() (curio.io.FileStream method), 41
writelines() (curio.file.AsyncFile method), 47
writelines() (curio.io.FileStream method), 41
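Taken together, the entries above span curio's task primitives (spawn(), Task, TaskGroup), synchronization objects (Event, Lock, Queue), and timeout helpers (timeout_after(), ignore_after()). The short sketch below is a non-authoritative illustration of how a few of these indexed names typically combine; the task names, queue payloads, and delay values are invented for the example.

    # Illustrative sketch only: combines spawn() via TaskGroup, Queue, Event,
    # and timeout_after() from the index above. Names and delays are made up.
    import curio

    async def producer(queue):
        # Put a handful of items on the queue, then send a sentinel.
        for n in range(3):
            await queue.put(n)
            await curio.sleep(0.1)
        await queue.put(None)          # sentinel: no more items

    async def consumer(queue, done):
        # Drain the queue until the sentinel arrives, then set the event.
        while True:
            item = await queue.get()
            if item is None:
                break
            print('consumed', item)
        await done.set()

    async def main():
        queue = curio.Queue()
        done = curio.Event()
        async with curio.TaskGroup() as g:
            await g.spawn(producer, queue)
            await g.spawn(consumer, queue, done)
            # Give the whole pipeline at most 5 seconds to finish.
            async with curio.timeout_after(5):
                await done.wait()

    if __name__ == '__main__':
        curio.run(main)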
