Performance and other feedback
I am investigating this library as I require dynamic firewalling. Most likely I'll end up getting away with an iptables wrapper in the end (which is annoying because it's not native), but I thought I would leave some pointers...
I don't know why, but in the online documentation nfq is marked as deprecated. It is also worth mentioning that the only way to get something such as REJECT might be to set a mark.
The biggest issue with using this for firewalling is going to be performance, especially given that it is single threaded, goes through JS, can quickly saturate buffers, etc.
The pcap implementation you suggest using isn't big on performance. It is not lazy, for example: it'll decode an entire IPv4 packet and potentially the protocol beneath it. I am not sure if there is a better library out there, but users should be aware that if they are serious about performance they might want to roll their own protocol decoding, which should be several times faster than using the pcap library. The pcap library also pulls in some things you won't need, complicating your build.
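To give a sense of what rolling your own decoding might look like, here is a minimal sketch that reads only the IPv4 fields a firewall rule typically needs, straight out of the raw packet Buffer. The function name is mine, not part of pcap or this library:

```javascript
// Minimal, allocation-light IPv4 header decode. Illustrative sketch only:
// it touches four fields and skips everything else (options, checksum, etc.).
function decodeIPv4Header(buf) {
  const versionIhl = buf.readUInt8(0);
  const version = versionIhl >> 4;
  if (version !== 4) return null;                // not IPv4
  const headerLength = (versionIhl & 0x0f) * 4;  // IHL is in 32-bit words
  return {
    headerLength,
    protocol: buf.readUInt8(9),                  // 6 = TCP, 17 = UDP, ...
    src: `${buf[12]}.${buf[13]}.${buf[14]}.${buf[15]}`,
    dst: `${buf[16]}.${buf[17]}.${buf[18]}.${buf[19]}`
  };
}
```

A decoder like this does no intermediate object graph and no string parsing beyond the two addresses, which is why it can beat a general-purpose library by a wide margin for simple match-and-verdict rules.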
Users could also get a very slight performance boost if they use the bindings directly.
One of the easiest ways to improve performance in the extension is to let the user specify how much of the payload they want to access from JS: the ability to peek. In my use case I would rarely want to touch anything but the IP and TCP headers. The main reason for doing this outside of iptables is that the rules are highly dynamic. This implementation appears to copy the entire payload into JS; chances are, on average, people are only going to want 10% of the payload and a fixed prefix. If people want deeper inspection in certain cases, they would probably filter those packets to another specialised queue. A better option would be some kind of lazy or immutable buffer, with one exception: if you don't have to send the whole packet back to netfilter, then much of it could be discarded early.
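A peek could be combined with laziness on the consumer side. The sketch below is purely hypothetical (neither the class nor `fetchFull` exists in any binding): the handler gets a small prefix Buffer up front, and the full payload is only copied into JS if something actually asks for it:

```javascript
// Hypothetical "peek" wrapper: handlers see only the first N bytes of the
// payload (enough for IP + TCP headers); the full copy happens on demand.
// `fetchFull` stands in for the native call that would copy the rest.
class PeekedPacket {
  constructor(prefix, fetchFull) {
    this.prefix = prefix;        // Buffer holding the first N bytes
    this._fetchFull = fetchFull;
    this._full = null;
  }
  get payload() {                // lazily materialise the whole packet, once
    if (this._full === null) this._full = this._fetchFull();
    return this._full;
  }
}
```

With something like this, the common path (match on headers, issue a verdict) never pays for the large copy at all.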
The nfq library is not brilliantly documented at a glance. Do you need to send the whole packet back, or just the ID?
Users may also not want the info structure. Excluding it, or making it lazy (wrapping it), would likely offer a fair performance gain.
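One way to make the info structure lazy is a self-replacing getter, so handlers that never touch it pay nothing. This is a sketch under my own names; `decodeInfo` is a placeholder for whatever work the binding currently does eagerly to populate the structure:

```javascript
// Sketch: defer building the nfq metadata object until a handler reads it.
// On first access the getter is replaced by the plain decoded value, so
// subsequent reads are ordinary property lookups.
function lazyInfo(rawMeta, decodeInfo) {
  const packet = {};
  Object.defineProperty(packet, 'info', {
    configurable: true,
    get() {
      const value = decodeInfo(rawMeta);
      Object.defineProperty(packet, 'info', { value });
      return value;
    }
  });
  return packet;
}
```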
The nfq library also offers batch processing options, and you can further add to this with batch processing on the JS interface (pass an array of queues). This could significantly reduce the overhead per packet processed. It sort of goes against the node.js way: large batches mean significant blocking. However, how much blocking is tolerable really depends on the user, so it could be tunable with something such as batch size. Relatively small batches, for example ten or a hundred packets combined with peek, could significantly improve performance with little impact on blocking or latency. As batches grow larger, the returns diminish.
What a batch of 50 potentially means is that if another data source triggers an event in JS 25 packets in, it will have to wait another 25 packets before being able to run, and those 25 packets will be processed with stale rules. In most use cases that won't be a problem; perfect ordering is not always needed. I am also not sure that you don't get that effect anyway with libuv.
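The tunable-batch idea can be sketched independently of the binding. Everything here is hypothetical: packets accumulate until a size threshold or a short timeout, whichever comes first, and `onBatch` would ultimately issue the per-packet (or, if the binding exposes one, batch) verdicts:

```javascript
// Sketch of a tunable batcher: collect packets and hand them to the handler
// in groups, flushing early on a timer so a slow trickle doesn't stall.
// None of this is the nfqueue API.
class Batcher {
  constructor(onBatch, { size = 50, maxDelayMs = 5 } = {}) {
    this.onBatch = onBatch;
    this.size = size;            // blocking/latency trade-off lives here
    this.maxDelayMs = maxDelayMs;
    this.pending = [];
    this.timer = null;
  }
  push(pkt) {
    this.pending.push(pkt);
    if (this.pending.length >= this.size) {
      this.flush();
    } else if (this.timer === null) {
      this.timer = setTimeout(() => this.flush(), this.maxDelayMs);
    }
  }
  flush() {
    if (this.timer !== null) { clearTimeout(this.timer); this.timer = null; }
    if (this.pending.length === 0) return;
    const batch = this.pending;
    this.pending = [];
    this.onBatch(batch);
  }
}
```

Exposing `size` and `maxDelayMs` as options is what makes the blocking question the user's call rather than the library's.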
@joeyhub I've developed a small user-space queued packet management system that uses Electron and React in the frontend, but had concerns about performance as per your notes. The end goal is to also have some intelligent management of blocking, which would be more dependent on these performance concerns, but for now, managing these packets case by case in my implementation, performance isn't a huge problem.
I also entirely agree with your notes on the nfq library and its documentation. This has been a rather large holdup for me: lots of source-code investigation for simple things. It's not hard to see why the uptake of nftables isn't advancing in leaps and bounds.
Is your project open source? I'd be interested in reviewing further.