Edit: Apparently (based on a comment below), nodejs has only one thread pool shared across all the worker threads. If that's the case, then the only way to get a separate pool per disk would be to use multiple processes, not multiple threads.
Or, you could enlarge the worker pool and then make your own queuing system that only puts a couple of requests for each separate disk into the worker pool at a time, giving you more parallelism across separate drives.
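A rough sketch of what that per-disk queuing could look like. The drive mapping here just uses the path root (which works for Windows drive letters; on Linux you'd have to map mount points to physical drives yourself), the limit of 2 in-flight requests per disk is arbitrary, and the `Semaphore`/`readThrottled` names are made up for illustration:

```ts
import { promises as fs } from "node:fs";
import path from "node:path";

// Tiny async semaphore: at most `permits` callers hold it at once.
class Semaphore {
  private waiters: Array<() => void> = [];
  constructor(private permits: number) {}
  async acquire(): Promise<void> {
    if (this.permits > 0) { this.permits--; return; }
    await new Promise<void>((resolve) => this.waiters.push(resolve));
  }
  release(): void {
    const next = this.waiters.shift();
    if (next) next();            // hand the permit straight to the next waiter
    else this.permits++;
  }
}

// One limiter per drive so a burst for one drive can't fill the whole pool.
const perDisk = new Map<string, Semaphore>();
function limiterFor(file: string): Semaphore {
  const drive = path.parse(path.resolve(file)).root;   // e.g. "C:\\" or "/"
  let sem = perDisk.get(drive);
  if (!sem) {
    sem = new Semaphore(2);      // allow ~2 concurrent requests per disk
    perDisk.set(drive, sem);
  }
  return sem;
}

export async function readThrottled(file: string): Promise<Buffer> {
  const sem = limiterFor(file);
  await sem.acquire();
  try {
    return await fs.readFile(file);
  } finally {
    sem.release();
  }
}
```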
Original answer (some of which still applies):
Without worker threads, you have a single libuv thread pool (4 threads by default) serving all disk I/O requests. They all go into the same pool, and once the threads in that pool are busy (regardless of which disk they are serving), new requests are queued in the order they arrive. This is potentially less than ideal: if you have 5 requests for drive A, 1 request for drive B, and 1 request for drive C, you'd rather not fill the pool with the drive A requests first, because then the requests for drives B and C have to wait until several drive A requests finish before they can even start. That forfeits some opportunity for parallelism across the separate drives. Of course, whether you truly get parallelism on separate drives also depends upon the drive controller implementation and whether the drives actually have separate SATA controllers or not.
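For example, something along these lines (with made-up paths standing in for three separate physical drives) reproduces that scenario: with the default pool of 4 threads, the five drive A reads occupy the pool and the drive B and drive C reads sit in libuv's queue behind them until threads free up:

```ts
import { promises as fs } from "node:fs";

// Hypothetical mount points standing in for three separate physical drives.
const driveA = [1, 2, 3, 4, 5].map((n) => `/mnt/a/file-${n}.bin`);
const others = ["/mnt/b/file-1.bin", "/mnt/c/file-1.bin"];

async function timedRead(file: string): Promise<void> {
  const start = Date.now();
  await fs.readFile(file);
  console.log(`${file} finished after ${Date.now() - start} ms`);
}

// All 7 requests land in the same libuv pool in arrival order; with the
// default UV_THREADPOOL_SIZE of 4, the B and C reads wait behind drive A.
await Promise.all([...driveA, ...others].map(timedRead));
```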
If you did use worker threads, with one nodejs worker thread per disk, you could at least guarantee a separate pool of OS threads for each disk, making it much less likely that a batch of requests for one drive keeps requests for the other drives from getting started and causes them to miss their opportunity to run in parallel with requests to the other drives.
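If you went that route, the shape of it might look roughly like this. Per the edit at the top, whether each worker really gets its own libuv pool is in question, so treat this as the shape of the experiment rather than a guaranteed win; the inline worker code, the mount points, and the message format are all assumptions for illustration:

```ts
import { Worker } from "node:worker_threads";

// Worker body: read files on request and report back (kept inline via eval
// so the sketch is self-contained).
const workerSource = `
  const { parentPort } = require("node:worker_threads");
  const fs = require("node:fs/promises");
  parentPort.on("message", async ({ id, file }) => {
    try {
      const data = await fs.readFile(file);
      parentPort.postMessage({ id, ok: true, bytes: data.length });
    } catch (err) {
      parentPort.postMessage({ id, ok: false, error: String(err) });
    }
  });
`;

// One worker per drive, so each drive's requests are serviced independently.
const drives = ["/mnt/a", "/mnt/b", "/mnt/c"];          // illustrative mounts
const workers = new Map(
  drives.map((d) => [d, new Worker(workerSource, { eval: true })])
);

let nextId = 0;
export function readViaWorker(drive: string, file: string): Promise<number> {
  const worker = workers.get(drive);
  if (!worker) throw new Error(`no worker for drive ${drive}`);
  const id = nextId++;
  return new Promise((resolve, reject) => {
    const onMessage = (msg: { id: number; ok: boolean; bytes?: number; error?: string }) => {
      if (msg.id !== id) return;            // reply belongs to another call
      worker.off("message", onMessage);
      if (msg.ok) resolve(msg.bytes as number);
      else reject(new Error(msg.error));
    };
    worker.on("message", onMessage);
    worker.postMessage({ id, file });
  });
}
```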
Now, of course, all of this discussion is theoretical. With disk drives, controller cards, and the operating system layered underneath, and libuv and nodejs layered on top of that, there are lots of opportunities for the theory not to bear out in real-world measurements.
So, the only way to really know for sure is to implement the worker thread option and benchmark it against a non-worker-thread option across several different disk usage scenarios, including a couple you think might be worst cases. As with any important performance question, you will inevitably have to measure to know one way or the other, and the benchmark tests themselves need very careful construction to be maximally useful.
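A bare-bones harness for that kind of comparison might look like the following. The file list is invented, and a real benchmark would also randomize request order, repeat runs, and account for the OS file cache (e.g. by reading distinct files each run):

```ts
import { promises as fs } from "node:fs";

// Time how long a batch of async jobs fired together takes to complete.
async function timeScenario(
  label: string,
  jobs: Array<() => Promise<unknown>>
): Promise<void> {
  const start = process.hrtime.bigint();
  await Promise.all(jobs.map((job) => job()));
  const ms = Number(process.hrtime.bigint() - start) / 1e6;
  console.log(`${label}: ${ms.toFixed(1)} ms for ${jobs.length} requests`);
}

// Illustrative worst-case mix: many reads on one drive, a few on the others.
const files = [
  ...Array.from({ length: 20 }, (_, i) => `/mnt/a/big-${i}.bin`),
  "/mnt/b/big-0.bin",
  "/mnt/c/big-0.bin",
];

// Baseline: plain fs through the shared pool. Swap in the worker-based or
// throttled readers from the sketches above to compare the other options.
await timeScenario("shared pool", files.map((f) => () => fs.readFile(f)));
```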