I'm going to base this on networkx's implementation.
Bidirectional Dijkstra stops when it encounters the same node in both directions, but the path it returns at that point is not necessarily through that node. Along the way it does additional bookkeeping to track the best candidate for the shortest path.
I'm going to base my explanation on your comment (on this answer):
Consider this simple graph (with nodes A, B, C, D, E). The edges of this graph and their weights are: "A->B:1", "A->C:6", "A->D:4", "A->E:10", "D->C:3", "C->E:1". When I use Dijkstra's algorithm on this graph from both sides, the forward search finds B after A and then D, and the backward search finds C after E and then D. At this point both sets contain the same vertex, so they intersect. Is this the termination point, or must the search continue? Because this answer (A->D->C->E) is incorrect.
For reference, the graph in question is the undirected graph on A, B, C, D, E with those six weighted edges.
When I run networkx's bidirectional Dijkstra on the (undirected) network from the counterexample you gave in that comment ("A->B:1", "A->C:6", "A->D:4", "A->E:10", "D->C:3", "C->E:1"), it gives me (7, ['A', 'C', 'E']), not A-D-C-E.
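If you want to check this yourself, something like the following should reproduce it (assuming a reasonably recent networkx; bidirectional_dijkstra returns a (length, path) pair):

import networkx as nx

# Build the undirected graph from the comment's counterexample.
G = nx.Graph()
G.add_weighted_edges_from([
    ('A', 'B', 1), ('A', 'C', 6), ('A', 'D', 4),
    ('A', 'E', 10), ('D', 'C', 3), ('C', 'E', 1),
])

print(nx.bidirectional_dijkstra(G, 'A', 'E'))  # (7, ['A', 'C', 'E'])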
The problem is a misunderstanding of what it's doing before it stops. It finds nodes exactly as you expect, but while it does that there is additional processing going on to track the shortest path. By the time it reaches D from both directions, it has already collected some other "candidate" paths, which may be shorter. There is no guarantee that a node reached from both directions ends up being part of the shortest path. Rather, at the point that some node has been reached from both directions, the current candidate shortest path is no longer than any candidate path it would find by continuing to run.
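Before walking through it, here is a rough sketch of the state being tracked, in Python with my own variable names (not networkx's): a cluster of settled nodes per direction, a fringe heap per direction, and one best candidate path.

import heapq

# Toy setup for the walkthrough below (illustrative names, not networkx's).
clusters = [{}, {}]            # settled nodes per direction: node -> distance
fringes = [[], []]             # per-direction heaps of (distance, node)
best_candidate = None          # (length, path) of the best path found so far

heapq.heappush(fringes[0], (0, 'A'))   # the search rooted at A
heapq.heappush(fringes[1], (0, 'E'))   # the search rooted at E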
The algorithm starts with two empty clusters, one associated with A and the other with E:
{} {}
and it will build up "clusters" around each. It first puts A into the cluster associated with A:
{A:0} {}
Now it checks whether A is already in the cluster around E (which is currently empty). It is not. Next, it looks at each neighbor of A and checks whether they are in the cluster around E. They are not. It then places all of those neighbors into a heap (like an ordered list) of upcoming neighbors of A, ordered by path length from A. Call this the 'fringe' of A (a small heapq sketch of this follows the state below).
clusters ..... fringes
{A:0} {} ..... A:[(B,1), (D,4), (C,6), (E,10)]
               E:[]
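In code, that fringe is just a heap keyed by distance from A, roughly like this (a toy illustration, not networkx's internals):

import heapq

fringe_A = []
for neighbor, weight in [('B', 1), ('C', 6), ('D', 4), ('E', 10)]:
    heapq.heappush(fringe_A, (weight, neighbor))   # keyed by distance from A

heapq.heappop(fringe_A)   # (1, 'B'): the closest node not yet in A's cluster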
Now it checks E. For E it does the symmetric thing. Place E into its cluster. Check that E is not in the cluster around A. Then check all of its neighbors to see whether any are in the cluster around A (they are not). Then it creates the fringe of E.
clusters fringes
{A:0} {E:0} ..... A:[(B,1), (D,4), (C,6), (E,10)]
                  E:[(C,1), (A,10)]
Now it goes back to A. It takes B from the fringe and adds it to the cluster around A. It checks whether any neighbor of B is in the cluster around E (B's only neighbor is A, which is already in A's cluster, so there is nothing new to consider). So we have:
clusters fringes
{A:0, B:1} {E:0} ..... A:[(D,4), (C,6), (E,10)]
                       E:[(C,1), (A,10)]
Back to E: we add C to the cluster of E and check whether any neighbor of C is in the cluster of A. What do you know, there's A. So we have a candidate shortest path, A-C-E, with distance 7. We'll hold on to that (this bookkeeping step is sketched in code below the state). We also add D to the fringe of E (with distance 4, since it's 1+3). We have:
clusters fringes
{A:0, B:1} {E:0, C:1} ..... A:[(D,4), (C,6), (E,10)]
                            E:[(D,4), (A,10)]
candidate path: A-C-E, length 7
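In code terms, the candidate bookkeeping at this step looks roughly like this (toy dictionaries holding an abridged version of the state built up so far; the names are mine, not networkx's):

# Tentative distances and partial paths seen so far from each end (abridged).
seen_from_A = {'A': 0, 'B': 1, 'D': 4, 'C': 6, 'E': 10}
seen_from_E = {'E': 0, 'C': 1}
paths_from_A = {'C': ['A', 'C']}
paths_from_E = {'C': ['E', 'C']}

w = 'C'   # C has just been reached from both directions
candidate_length = seen_from_A[w] + seen_from_E[w]            # 6 + 1 = 7
candidate_path = paths_from_A[w] + paths_from_E[w][-2::-1]    # ['A', 'C', 'E']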
Back to A: we get the next thing from its fringe, D. We add it to the cluster around A, and note that its neighbor C is in the cluster around E. So we have a new candidate path, A-D-C-E, but its length (4+3+1 = 8) is greater than 7, so we discard it.
clusters fringes
{A:0, B:1, D:4} {E:0, C:1} ..... A:[(C,6), (E,10)]
                                 E:[(D,4), (A,10)]
candidate path: A-C-E, length 7
Now we go back to E. We look at D. It's in the cluster around A. We can be sure that any future candidate path we would encounter will have length at least as large as the A-D-C-E path we have just traced out (this claim isn't necessarily obvious, but it is the key to this approach; intuitively, each search settles nodes in order of increasing distance, so anything discovered from here on can only involve nodes at least as far from A and from E as the ones already settled). So we can stop. We return the candidate path found earlier, A-C-E with length 7.
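To tie the walkthrough together, here is a compact, self-contained sketch of the whole procedure in the spirit of networkx's implementation. It is not networkx's actual code, and the variable names are mine. On the graph above it meets at D and returns the candidate recorded earlier:

import heapq

def bidirectional_dijkstra(graph, source, target):
    """Return (distance, path) on an undirected graph given as
    {node: {neighbor: weight}}.  A sketch mirroring the walkthrough above."""
    if source == target:
        return 0, [source]

    # Index 0 = search rooted at source, index 1 = search rooted at target.
    clusters = [{}, {}]                           # settled nodes -> final distance
    seen = [{source: 0}, {target: 0}]             # tentative distances
    paths = [{source: [source]}, {target: [target]}]
    fringes = [[(0, source)], [(0, target)]]      # heaps of (distance, node)

    best_dist, best_path = None, None
    direction = 1
    while fringes[0] and fringes[1]:
        direction = 1 - direction                 # alternate the two searches
        dist, v = heapq.heappop(fringes[direction])
        if v in clusters[direction]:
            continue                              # stale heap entry; already settled
        clusters[direction][v] = dist
        if v in clusters[1 - direction]:
            # v is settled from both directions: by the stopping rule discussed
            # above, the current best candidate is the shortest path.
            return best_dist, best_path
        for w, weight in graph[v].items():
            new_dist = dist + weight
            if w not in seen[direction] or new_dist < seen[direction][w]:
                seen[direction][w] = new_dist
                paths[direction][w] = paths[direction][v] + [w]
                heapq.heappush(fringes[direction], (new_dist, w))
                # If w has been reached from both sides, record a candidate path.
                if w in seen[0] and w in seen[1]:
                    total = seen[0][w] + seen[1][w]
                    if best_dist is None or total < best_dist:
                        best_dist = total
                        best_path = paths[0][w] + paths[1][w][-2::-1]
    raise ValueError("no path between source and target")

graph = {
    'A': {'B': 1, 'C': 6, 'D': 4, 'E': 10},
    'B': {'A': 1},
    'C': {'A': 6, 'D': 3, 'E': 1},
    'D': {'A': 4, 'C': 3},
    'E': {'A': 10, 'C': 1},
}
print(bidirectional_dijkstra(graph, 'A', 'E'))   # prints (7, ['A', 'C', 'E'])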