The big difference between XmlSlurper and XmlParser is that the parser eagerly creates something similar to a DOM, while the slurper creates structures only if really needed and thus works with paths that are evaluated lazily. To the user both can look very similar. The difference is that the parser structure is evaluated only once, while the slurper paths may be evaluated on demand. "On demand" can be read as "more memory efficient but slower" here. Ultimately it depends on how many paths/queries you execute. If, for example, you only want the value of an attribute in a certain part of the XML and are then done with it, XmlParser would still process everything and execute your query on the quasi-DOM; in doing that a lot of objects are created and memory and CPU are spent. XmlSlurper will not create those objects, and thus saves memory and CPU. If you need all parts of the document anyway, the slurper loses that advantage, since it will create at least as many objects as the parser would.
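To illustrate how similar the two look on the surface, here is a minimal sketch reading one attribute with each API (assuming Groovy 3+, where both classes live in `groovy.xml`; in older versions they are in `groovy.util`; the `<users>` document is made up for the example):

```groovy
import groovy.xml.XmlParser
import groovy.xml.XmlSlurper

def xml = '<users><user name="Fred" role="admin"/></users>'

// XmlParser eagerly builds a tree of groovy.util.Node objects (the quasi-DOM).
def parsed = new XmlParser().parseText(xml)
assert parsed.user[0].@name == 'Fred'

// XmlSlurper returns a GPathResult; the path below is evaluated lazily.
def slurped = new XmlSlurper().parseText(xml)
assert slurped.user[0].@name == 'Fred'
```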
Both can do transforms on the document, but the slurper assumes the document is constant: to see a change you first have to write the modified XML out and create a new slurper to read it back in. The parser supports seeing the changes right away.
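A small sketch of the parser side (same made-up `<users>` document): mutations on the `Node` tree are visible immediately through the same object graph, and `XmlUtil.serialize` is one way to write the result back out:

```groovy
import groovy.xml.XmlParser
import groovy.xml.XmlUtil

def xml = '<users><user name="Fred"/></users>'
def root = new XmlParser().parseText(xml)

// Mutate the Node tree directly; Node.attributes() is the live attribute map.
root.user[0].attributes()['name'] = 'John'

// The same parser object graph immediately reflects the change...
assert root.user[0].@name == 'John'

// ...and serializing writes the updated document.
println XmlUtil.serialize(root)
```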
So the answer to question (1), the use case: use the parser if you have to process the whole XML, the slurper if you only need parts of it. API and syntax don't really play much of a role here; the Groovy developers try to keep the user experience of the two very similar. You would also prefer the parser over the slurper if you want to make incremental changes to the XML.
The intro above also answers question (2), which is more memory efficient: the slurper is, unless you read everything in anyway, in which case the parser may be, though I don't have actual numbers on how big the difference is then.
Question (3) can also be answered by the intro. If you have multiple lazily evaluated paths, each of them has to be evaluated again, which can be slower than just navigating an existing object graph as with the parser. So the parser can be faster, depending on your usage.
So I would say that in case (3a), reading almost all nodes, the reading itself makes little difference, since the queries then become the determining factor. But in case (3b), where you only have to read a few nodes, the slurper should be faster, since it does not have to create a complete structure in memory, which in itself already costs time and memory.
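If you want to check this for your own documents, a rough timing harness along these lines can help. This is only a sketch, not a rigorous benchmark; the synthetic `<users>` document is made up, and results will depend on document size, query mix, and JVM warm-up:

```groovy
import groovy.xml.XmlParser
import groovy.xml.XmlSlurper

// Build a larger synthetic document so differences become measurable.
def sb = new StringBuilder('<users>')
10_000.times { sb << "<user name='u$it'/>" }
sb << '</users>'
def xml = sb.toString()

def time(String label, Closure work) {
    def start = System.nanoTime()
    work()
    println "$label: ${(System.nanoTime() - start) / 1_000_000} ms"
}

// Case (3b): a single query for one node.
time('slurper, one attribute') {
    assert new XmlSlurper().parseText(xml).user[0].@name == 'u0'
}
time('parser, one attribute') {
    assert new XmlParser().parseText(xml).user[0].@name == 'u0'
}

// Case (3a): touching (almost) all nodes, repeatedly.
time('slurper, repeated full scans') {
    def root = new XmlSlurper().parseText(xml)
    100.times { assert root.user.size() == 10_000 }
}
time('parser, repeated full scans') {
    def root = new XmlParser().parseText(xml)
    100.times { assert root.user.size() == 10_000 }
}
```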
As for (3c): these days both can update/transform the XML. Which is faster is mostly linked to how many parts of the XML you have to change; if many, I would say the parser, if only a few, maybe the slurper. But if you, for example, change an attribute value from "Fred" to "John" with the slurper, only to later query for this "John" using the same slurper, it won't work, as the sketch below shows.
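Here is a sketch of that failure mode, following the `replaceNode` pattern from the Groovy documentation (the `<users>` document is made up; I use `replaceNode` to effect the attribute change, since slurper modifications are only recorded and applied on serialization):

```groovy
import groovy.xml.StreamingMarkupBuilder
import groovy.xml.XmlSlurper

def xml = '<users><user name="Fred"/></users>'
def root = new XmlSlurper().parseText(xml)

// Record a replacement: rename Fred to John (MarkupBuilder-style closure).
root.user[0].replaceNode { user(name: 'John') }

// Querying the same slurper still answers from the original document:
assert root.user[0].@name == 'Fred'

// Only after writing out and re-parsing does the change become visible:
def out = new StreamingMarkupBuilder().bind { mkp.yield root }.toString()
def updated = new XmlSlurper().parseText(out)
assert updated.user[0].@name == 'John'
```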