No, there is no magic bullet.
(As an aside, you have to realize that there is no such thing as a "directory" in S3. There are only objects with keys. You can get directory-like listings, but the '/' character isn't magic - you can use any character you like as the delimiter.)
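For example, delimiter-based listing is just a parameter on the list call. A minimal sketch with boto3; the bucket name, key layout, and the '|' delimiter are made up for illustration:

```python
import boto3

s3 = boto3.client("s3")

resp = s3.list_objects_v2(
    Bucket="my-bucket",   # hypothetical bucket
    Prefix="batch-01|",   # any prefix string works
    Delimiter="|",        # '/' is conventional, but any character will do
)

# "Sub-directories" are really just common prefixes grouped by the delimiter.
for cp in resp.get("CommonPrefixes", []):
    print("prefix-like:", cp["Prefix"])
for obj in resp.get("Contents", []):
    print("object:", obj["Key"], obj["Size"])
```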
As someone pointed out, "pre-zipping" them can help both download speed and append speed. (At the expense of duplicate storage.)
If downloading is the bottleneck, it sounds like you are downloading serially. S3 can support thousands of simultaneous connections to the same object without breaking a sweat. You'll need to run benchmarks to see how many connections are best, since too many connections from one box might get throttled by S3. And you may need to do some TCP tuning when doing thousands of connections per second.
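A rough sketch of parallelizing the downloads with boto3 and a thread pool (boto3 clients are thread-safe). The bucket name, key pattern, and worker count are assumptions; benchmark to find the concurrency that your boxes and object sizes can sustain:

```python
import concurrent.futures
import boto3

s3 = boto3.client("s3")
BUCKET = "my-bucket"  # hypothetical
keys = [f"files/file-{i:04d}.dat" for i in range(5000)]  # hypothetical keys

def fetch(key):
    # One GET per key; each worker thread holds its own connection to S3.
    body = s3.get_object(Bucket=BUCKET, Key=key)["Body"].read()
    return key, len(body)

# Start with a modest pool and raise max_workers until throughput stops improving.
with concurrent.futures.ThreadPoolExecutor(max_workers=64) as pool:
    for key, size in pool.map(fetch, keys):
        print(key, size)
```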
The "solution" depends heavily on your data access patterns. Try re-arranging the problem. If your single-file downloads are infrequent, it might make more sense to group them 100 at a time into S3, then break them apart when requested. If they are small files, it might make sense to cache them on the filesystem.
Or it might make sense to store all 5000 files as one big zip file in S3, and use a "smart client" that can download specific ranges of the zip file in order to serve the individual files. (S3 supports byte-range GET requests.)
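The byte-range part of that "smart client" is just the HTTP Range header, which S3 honors on GET. Locating a member's offset means parsing the zip's central directory (stored at the end of the archive), which is omitted here; the key name and offsets below are placeholders:

```python
import boto3

s3 = boto3.client("s3")

def get_range(bucket, key, start, end):
    # Fetch only the bytes [start, end] of the object, not the whole thing.
    resp = s3.get_object(Bucket=bucket, Key=key, Range=f"bytes={start}-{end}")
    return resp["Body"].read()

# Suffix ranges work too, e.g. grab the last 64 KiB where the zip's
# central directory usually lives, then use it to find member offsets.
tail = s3.get_object(
    Bucket="my-bucket", Key="all-files.zip", Range="bytes=-65536"
)["Body"].read()
```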