A coworker pointed me to this Nginx module today. You can write a chat server without actually writing a server. The message thread below indicates impressive performance: if you’ve got more than 50K users and 9,000 messages/second, you might need to upgrade your hardware, or at least load-balance your channels across two servers.
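To make "without writing a server" concrete, here is a minimal sketch of what the config could look like, based on the module's documented publisher/subscriber directives (the location names and the use of `$arg_id` as the channel id are my choices, not from the thread):

```nginx
# Clients long-poll here; nginx holds the connection open
# until a message arrives on the requested channel.
location /subscribe {
    push_subscriber             long-poll;
    set $push_channel_id        $arg_id;    # channel selected via ?id=...
    push_subscriber_concurrency broadcast;  # every subscriber gets each message
}

# Anything POSTed here is queued and pushed to that channel's subscribers.
location /publish {
    push_publisher;
    set $push_channel_id        $arg_id;
}
```

A chat client is then just a page of JavaScript that POSTs to `/publish` and long-polls `/subscribe`; there is no application server in the path at all.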
When I open 10,000 connections, it seems to behave quite nicely. Sending half a million messages, I am able to get a throughput of around 9,000 messages per second. At this rate “top” shows the nginx process as high as 90% of CPU. If I push it harder, I start to receive SIGIO in the nginx main log and the writer/poster is throttled down, meaning a lower throughput, but all messages appear to get through to the clients on the other machine. However, when I perform the same tests with 50,000 connections, I see a similar pattern of throughput up to about 6,000 or 7,000 messages/second. As before, when I push faster I get the same SIGIO in the log, but the difference is that not all the messages get through to clients!
[…later down the page]
Many thanks for your explanation. Your suspicion was correct: I was using the default 30sec for that parameter. I tried upping it to 5m and was able to receive messages more reliably with 50,000 clients connected. Sometimes, however, the rate at which messages were sent from nginx slowed right down. For example, I could get 9,000/sec for a sustained minute or so, and then, when the poster stopped posting, the rate of messages would slow almost to a stop (but not quite) until all messages were successfully sent.
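The thread never names the parameter that was raised from 30sec to 5m. My guess, and it is only a guess, is the module's message-expiry setting, which governs how long an undelivered message stays buffered; in nginx_http_push_module that would look something like this (the directive name here is my assumption, not confirmed by the thread):

```nginx
# Assumption: the unnamed parameter is the message expiry time.
# With a short timeout, messages queued for slow subscribers can
# expire before delivery, which would explain the lost messages
# at 50,000 connections.
location /publish {
    push_publisher;
    set $push_channel_id    $arg_id;
    push_message_timeout    5m;   # up from the 30-second value in the thread
}
```

That would also fit the tail-off behavior described above: with a longer expiry, nginx keeps draining the backlog to slow clients after the poster stops, so delivery slows to a trickle but everything eventually arrives.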