Tornado TCP Server tutorial (Part I)

TL;DR
From this post you’ll learn how to implement an asynchronous echo TCP server and client using the Tornado framework. All code is on GitHub.

Roadmap

  1. It’s the first post in the series and is devoted to building a simple echo TCP server in Tornado, to show implementation details and coroutine usage.
  2. In the next post we’ll take a closer look at how to connect Tornado to Redis PUB/SUB mechanism in order to deliver data updates in real time.
  3. The final part will cover the implementation of the simple protocol that can control client-server interaction.

Background

With the Real Time Web in full swing nowadays, more and more applications need sophisticated interaction mechanisms between the server and its clients. Notifications, live chats, and streaming data are common examples of so-called “real time” interactions, where the server is not only responsible for receiving and serving client requests but also generates its own activity towards the clients.

All of these features are possible thanks to Push technology. One of the more popular ways of implementing it nowadays is the WebSocket protocol. It starts with an HTTP handshake and then provides bidirectional connectivity between server and client.

As described in the WebSocket RFC,

The WebSocket Protocol is an independent TCP-based protocol. Its only relationship to HTTP is that its handshake is interpreted by HTTP servers as an Upgrade request.

So, after all, it’s just a special kind of TCP connection with certain agreements on how server and client behave.
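
For reference, that handshake is just an ordinary HTTP exchange with Upgrade headers; the request and response below are the illustrative (trimmed) ones from the RFC, not the output of any code in this post:

GET /chat HTTP/1.1
Host: server.example.com
Upgrade: websocket
Connection: Upgrade
Sec-WebSocket-Key: dGhlIHNhbXBsZSBub25jZQ==
Sec-WebSocket-Version: 13

HTTP/1.1 101 Switching Protocols
Upgrade: websocket
Connection: Upgrade
Sec-WebSocket-Accept: s3pPLMBiTxaQ9kYGzzhZRbK+xOo=

After the 101 response, the same TCP connection stays open and both sides exchange WebSocket frames instead of HTTP messages.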

The solution I’m going to describe here is a little bit different. I suggest using the raw TCP protocol, so we can play around with the lower-level stuff and also learn how to use Push connectivity in environments where WebSockets aren’t available. It might also be a tiny bit faster than using WebSockets, although I wouldn’t count on that, since experiments show that WebSockets are heavily optimized.

Echo TCP Server

So let’s kick this off by implementing a simple TCP echo server that just listens to the connections on a certain port and responds back with whatever data it receives.

We will use the Tornado asynchronous web framework for this task; it has proved to be extremely robust and able to handle tens of thousands of concurrent connections. An alternative would be the standard Python socketserver module.
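
For comparison, a blocking echo server built on socketserver might look roughly like the sketch below. This is not the approach taken in this series, just an illustration of the stdlib alternative (the handler name is made up):

import socketserver

class EchoHandler(socketserver.StreamRequestHandler):
    def handle(self):
        # rfile/wfile are file-like wrappers around the client socket;
        # echo every received line back until the client disconnects
        for line in self.rfile:
            self.wfile.write(line)

if __name__ == '__main__':
    # each client is handled in its own thread, as opposed to
    # Tornado's single-threaded event loop
    server = socketserver.ThreadingTCPServer(('localhost', 5567), EchoHandler)
    server.serve_forever()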

The code here is for Python 3 only. If you want to run it on Python 2, you need to make small tweaks to the coroutines’ return points. Refer here for more details.
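
The tweak in question, roughly, is that Python 2 generators cannot use return with a value, so Tornado coroutines have to raise gen.Return instead; a minimal illustration (the function names here are made up):

from tornado import gen

@gen.coroutine
def compute_py3():
    # Python 3: a coroutine may simply return its result
    return 42

@gen.coroutine
def compute_py2():
    # Python 2: the value is returned by raising the special gen.Return exception
    raise gen.Return(42)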

Here’s a quick look at the Tornado server:

from tornado import gen
from tornado.iostream import StreamClosedError
from tornado.tcpserver import TCPServer


class Server(TCPServer):
    message_separator = b'\r\n'

    @gen.coroutine
    def handle_stream(self, stream, address):
        while True:
            try:
                # wait for the next separator-terminated message from the client
                request = yield stream.read_until(self.message_separator)
            except StreamClosedError:
                stream.close(exc_info=True)
                return
            try:
                # echo the same bytes back to the client
                yield stream.write(request)
            except StreamClosedError:
                stream.close(exc_info=True)
                return

We subclass tornado.tcpserver.TCPServer here. The only method that needs to be overridden in order to make it work is handle_stream. It takes a stream representing the client socket connection, wrapped in a special Tornado object, as well as the client address, which is a simple tuple of the form ('127.0.0.1', 55196). The body of the method consists of an infinite loop that reads data from the socket with stream.read_until and sends the same data back to the client by issuing a stream.write call.

A sudden shutdown of the socket is by no means an unexpected situation, given all the possible ways the connection might be cut (routing error, client closing the application, timeout, etc.). That’s why, in such a case, we clean up the socket resource on our side and simply return from the method.

The tricky part here is that handle_stream is not a typical function. It’s something called a coroutine, a special type of function used in asynchronous programming. You can see the yield calls on the read_until and write lines above. They look like the yields in generators, but in fact these are the places where execution of the current function is suspended and some other coroutine may run until the requested value is ready. Then execution continues as if nothing had happened.

The topic of coroutines is very broad and definitely out of the scope of this post, so check out the Tornado docs and the Python asyncio docs for a detailed explanation. For now, you can just treat these as regular function calls.
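
If you want a tiny, self-contained illustration of the idea (this snippet is not part of the echo server), a Tornado coroutine can yield a non-blocking sleep, and the IOLoop runs other coroutines while it waits:

from tornado import gen
from tornado.ioloop import IOLoop

@gen.coroutine
def greet(name, delay):
    # execution is suspended here; the IOLoop is free to run other coroutines
    yield gen.sleep(delay)
    print('hello, %s' % name)

if __name__ == '__main__':
    # both greetings are scheduled concurrently and finish after ~1 second,
    # not ~2, because neither sleep blocks the loop
    IOLoop.current().run_sync(lambda: gen.multi([greet('alice', 1), greet('bob', 1)]))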

The server launch is also something worth mentioning:

from tornado.ioloop import IOLoop

if __name__ == '__main__':
    Server().listen(5567)
    print('Starting the server...')
    IOLoop.instance().start()
    print('Server has shut down.')

The server starts working when the IOLoop instance spins up and begins processing the submitted coroutines. Once the IOLoop exits, the whole program finishes.
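
The loop runs until something stops it. For example (this is not part of the repository code, just a sketch), you could stop the loop on Ctrl+C so that the final print is actually reached instead of a KeyboardInterrupt traceback:

import signal
from tornado.ioloop import IOLoop

def install_sigint_handler():
    # ask the IOLoop to stop itself as soon as it is safe to do so
    def on_sigint(signum, frame):
        IOLoop.instance().add_callback_from_signal(IOLoop.instance().stop)
    signal.signal(signal.SIGINT, on_sigint)

Calling install_sigint_handler() right before IOLoop.instance().start() would then let the server exit cleanly and print 'Server has shut down.'.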

The client code is simpler:

from tornado import gen
from tornado.tcpclient import TCPClient


class Client(TCPClient):
    msg_separator = b'\r\n'

    @gen.coroutine
    def run(self, host, port):
        stream = yield self.connect(host, port)
        while True:
            data = input(">> ").encode('utf8')
            if not data:
                # an empty line ends the session
                break
            data += self.msg_separator
            yield stream.write(data)
            data = yield stream.read_until(self.msg_separator)
            body = data.rstrip(self.msg_separator)
            print(body)

Here we asynchronously connect to the specified socket (host/port pair) and start an infinite loop: wait for data input from the console, write this data to the socket, and read the server’s response back.

Finally, the client launch code is similar to the server’s:

from tornado.ioloop import IOLoop

if __name__ == '__main__':
    Client().run('localhost', 5567)
    print('Connecting to server socket...')
    IOLoop.instance().start()
    print('Socket has been closed.')

Note how in both the server and the client we end every message with the b'\r\n' separator. The reason is that TCP transports raw bytes over the wire, and there is no way to tell where one message ends and the next one begins. Ending messages with a separator is one way of implementing a communication protocol on top of TCP sockets. Here is a nice overview of the most common approaches, including this one.
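
To make the framing idea concrete, here is a small hypothetical helper (not part of the repository) that splits a raw byte buffer into complete messages using the same separator, keeping any unfinished tail for the next read:

def split_messages(buffer, separator=b'\r\n'):
    # returns (messages, remainder): `messages` are payloads fully terminated
    # by `separator`, `remainder` is the trailing partial message that has to
    # wait for more bytes to arrive
    parts = buffer.split(separator)
    return parts[:-1], parts[-1]

# usage: two complete messages plus an unfinished one
messages, rest = split_messages(b'first\r\nsecond\r\nthir')
assert messages == [b'first', b'second']
assert rest == b'thir'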

Now we can launch everything and see how it works.

Start the server:

$ python real_time/server.py
Starting the server...

Start the client:

$ python real_time/client.py
Connecting to server socket...
>> echo me
b'echo me'

You can launch as many clients in separate terminals as you wish and see that each of them receives its own messages back.

Conclusion

Now, the code here isn’t particularly useful on its own, but we’ll go on from here and tweak it to get some cool functionality.

As mentioned at the beginning, the code for this echo server/client is on GitHub; clone it, fork it, run it, and play with it yourself. That’s all for now. Don’t brawl, stay calm.

References:

  1. Push technologies:
    1. wikipedia.org: Push technology
    2. stackoverflow.com: What are Long-Polling, Websockets, Server-Sent Events (SSE) and Comet?
    3. medium.com: Python and Server-sent Event
    4. fullstackpython.com: WebSockets
    5. ietf.org: WebSockets RFC
  2. Python TCP:
    1. docs.python.org: socket.py
    2. docs.python.org: socketserver.py
    3. tornadoweb.org: Tornado framework
    4. pymotw.com: TCP/IP Client and Server
    5. binarytides.com: Code a simple socket server in Python
    6. curiousefficiency.org: TCP echo client and server in Python 3.5
    7. gist.github.com: Simple example of creating a socket server with Tornado

Serge Mosin

https://www.databrawl.com/author/svmosingmail-com/

Pythonista, Data/Code lover, Alpine skier and gym hitter.