I have a long, lazy sequence that I want to reduce and test lazily. As soon as two sequential elements are not = to each other (or fail some other predicate), I want to stop consuming the list, which is expensive to produce. Yes, this sounds like take-while, but read further.
I wanted to write something simple and elegant like this (pretending for a minute that every? works like reduce):
(every? = (range 100000000))
But that does not work lazily: every? applies its predicate to one element at a time, and (= x) is always true, so it consumes the whole sequence and hangs on infinite seqs. I discovered that this works almost as I wanted:
(apply = (range 100000000))
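This almost works because the variadic arity of = compares its arguments pairwise, left to right, and returns false at the first mismatch, so the tail of the seq is never forced. Conceptually it behaves like this sketch (a model for illustration only, not the actual clojure.core implementation):

;; Conceptual model of variadic =; clojure.core's version differs
;; in detail but short-circuits the same way.
(defn pairwise= [x y & more]
  (if (= x y)
    (if more
      (recur y (first more) (next more))
      true)
    false))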
However, I noticed that sequence chunking was resulting in extra, unnecessary elements being created and tested. At least, this is what I think is happening in the following bit of code:
;; Displays chunking behavior in groups of four on my system and prints 1 2 3 4
(apply = (map #(do (println %) %) (iterate inc 1)))
;; This prints 0 to 31
(apply = (map #(do (println %) %) (range)))
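For what it's worth, the usual trick for defeating chunking is to rewrap the sequence so elements are realized one at a time. A sketch (unchunk is a hand-rolled helper, not a core function):

;; Rewraps a possibly chunked seq so elements are realized one by one.
(defn unchunk [s]
  (lazy-seq
    (when-let [s (seq s)]
      (cons (first s) (unchunk (rest s))))))

;; Now only a handful of elements print instead of 32, though apply
;; itself still forces a few to work out which arity of = to call:
(apply = (map #(do (println %) %) (unchunk (range))))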
I found a workaround using take-while and count to check the number of elements taken, but that is rather cumbersome.
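I don't have the exact code at hand, but it was along these lines (a hypothetical reconstruction, with a throwaway vector standing in for the expensive seq):

;; Hypothetical reconstruction of the workaround, not the exact code.
;; Lazily compare each element with its successor, stop at the first
;; mismatch, then count how many comparisons succeeded.
(let [xs [1 1 1 2 1]
      equal-prefix (take-while true? (map = xs (rest xs)))]
  ;; all pairs were equal iff we took (dec (count xs)) comparisons
  (= (count equal-prefix) (dec (count xs))))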
Should I politely suggest to Rich Hickey that he make some combination of reduce and every? short-circuit properly, or am I missing some obvious way that already exists?
EDIT: Two kind people posted solutions for avoiding chunking on the lazy sequences, but how do I avoid chunking when doing the apply, which seems to be consuming in chunked groups of four?
EDIT #2: As Stuart Sierra notes, and as I discovered independently, this isn't actually chunking. It's just apply acting normally, so I'll call this closed and give him the answer. I included a small function in a separate answer to do the reducing part of the problem, for those who are interested.
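For later readers: Clojure 1.5 added reduced, which makes a properly short-circuiting reduce easy to write. A sketch (seq= is just a name I made up, not a core function):

;; Returns false as soon as two consecutive elements differ; reduced
;; stops the reduce early (modulo any chunking in the source seq).
;; Requires Clojure 1.5+ for reduced.
(defn seq= [coll]
  (reduce (fn [_ [a b]]
            (if (= a b) true (reduced false)))
          true
          (partition 2 1 coll)))

(seq= [1 1 1])         ;=> true
(seq= (iterate inc 1)) ;=> false, stops after the first pair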