Hacker News

I only tested NATS with JetStream, and I struggled with the throughput in Python. I probably used it wrong. But your comment may imply that JetStream is slow.


I think sometimes the client bindings are/were in need of improvement.

As an example, the C# API was originally very 'go-like' and written against .NET Framework, and didn't take advantage of a lot of newer features... to the point that a third-party client was able to get somewhere between 3-4x the throughput. This is now finally being rectified with a new C# client, but it wouldn't surprise me if other languages have similar pains.

I haven't tested JetStream, but my general understanding is that you do have to be mindful of the different options for publishing. Especially with JetStream, a synchronous publish call can be relatively time-consuming, since it waits for the server's acknowledgement; it's better to publish asynchronously and (ab)use the futures for flow control as needed.
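A sketch of why that matters, using plain asyncio with a fake publish coroutine standing in for the client (the 5 ms ack latency and the function names here are illustrative assumptions, not the real NATS API):

```python
import asyncio
import time

ACK_LATENCY = 0.005  # assumed 5 ms round-trip for a server ack (illustrative)

async def publish(payload: bytes) -> str:
    """Stand-in for a JetStream publish: resolves once the server acks."""
    await asyncio.sleep(ACK_LATENCY)
    return "ack"

async def publish_sync_style(n: int) -> float:
    """Await each publish before sending the next: n full round-trips."""
    start = time.perf_counter()
    for _ in range(n):
        await publish(b"payload")
    return time.perf_counter() - start

async def publish_pipelined(n: int) -> float:
    """Fire all publishes at once, then await the acks together."""
    start = time.perf_counter()
    acks = await asyncio.gather(*(publish(b"payload") for _ in range(n)))
    assert all(a == "ack" for a in acks)
    return time.perf_counter() - start

sequential = asyncio.run(publish_sync_style(20))
pipelined = asyncio.run(publish_pipelined(20))
print(f"sequential: {sequential:.3f}s  pipelined: {pipelined:.3f}s")
```

With a per-message ack round-trip, the sequential loop pays the latency 20 times while the pipelined version pays it roughly once; a real client that hands back futures lets you get the same overlap against an actual server.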


I didn’t mean to imply that JetStream is slow. It’s just that I did my benchmarks without it. On a local PC, with 10 KiB messages sent (synchronously) in a loop, I could transfer 3.2 GiB over 5 seconds with 0.2 ms latency. Performing the same test with RabbitMQ, I got even better throughput out of the box, but way worse latency.

Those numbers are for server 2.9.6 and .NET client 1.0.8.
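Back-of-envelope on the throughput figures above (3.2 GiB over 5 seconds in 10 KiB messages), just to put the numbers in perspective:

```python
# Derive message rate and per-message time from the stated benchmark figures.
total_bytes = 3.2 * 2**30   # 3.2 GiB transferred
msg_bytes = 10 * 2**10      # 10 KiB per message
seconds = 5

messages = total_bytes / msg_bytes   # ~335,544 messages
msgs_per_sec = messages / seconds    # ~67,109 msg/s
us_per_msg = 1e6 / msgs_per_sec      # ~14.9 microseconds per publish

print(round(msgs_per_sec), round(us_per_msg, 1))
```

So the synchronous loop was averaging on the order of 15 µs per publish, i.e. roughly 67k messages per second.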


We used it for some low-latency stuff in Python; it was about 10 ms to enqueue at worst. However, we were using raw NATS (no JetStream), and had a clear SLA under which the queue was allowed to be offline, or messages lost, as long as we notified the right services.


I’m not familiar with the Python lib, but it could be waiting for the stream to acknowledge each message's reception/persistence before sending the next one. Some clients allow publishes to run in parallel, e.g. with futures.
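One common shape for that pattern, sketched with a placeholder coroutine (`fake_js_publish` is an assumption standing in for a real client call, not the actual API): keep a bounded window of publishes in flight and await each window's futures before continuing, so you get pipelining without unbounded memory use.

```python
import asyncio

async def fake_js_publish(i: int) -> int:
    # Stand-in for a JetStream publish coroutine; a real client would
    # return an awaitable that resolves when the server acks persistence.
    await asyncio.sleep(0.001)
    return i

async def publish_windowed(n: int, window: int = 64) -> list[int]:
    """Publish n messages with at most `window` in flight at once."""
    acks = []
    for start in range(0, n, window):
        batch = [fake_js_publish(i) for i in range(start, min(start + window, n))]
        # gather preserves order, so acks line up with message indices
        acks.extend(await asyncio.gather(*batch))
    return acks

acks = asyncio.run(publish_windowed(200))
print(len(acks))
```

The window size is the flow-control knob: larger windows amortize the ack round-trip further, at the cost of more outstanding messages if the server or network stalls.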



