Talk:Strongly connected component

There is also

  • There is also an algorithm called SCC that computes strongly connected components in graphs, by taking the inverse of a graph and working on the transpose G^T = (V, E^-1) of G. Would it help if I put the pseudocode algorithm in here? (A sketch of this approach appears just after this list.) Jontce 11:08, 17 May 2005 (UTC)
  • Isn't this a dictionary definition? At least, it should be linked to something else. m.e. 10:40, 28 May 2004 (UTC)
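
For later readers, here is a minimal sketch of the transpose-based approach described above (usually credited to Kosaraju), written in Python. The dict-of-adjacency-lists representation and the function names are purely illustrative, not taken from the article or from any textbook:

    def kosaraju_scc(graph):
        """Strongly connected components of a directed graph.

        `graph` maps each vertex to an iterable of its successors.
        Returns a list of components, each a list of vertices.
        """
        # Pass 1: DFS on G, recording vertices in order of increasing finishing time.
        visited, order = set(), []

        def dfs_forward(u):
            visited.add(u)
            for w in graph.get(u, ()):
                if w not in visited:
                    dfs_forward(w)
            order.append(u)  # u is now finished

        for v in graph:
            if v not in visited:
                dfs_forward(v)

        # Build the transpose G^T = (V, E^-1) by reversing every edge.
        transpose = {u: [] for u in visited}
        for u in graph:
            for w in graph[u]:
                transpose[w].append(u)

        # Pass 2: DFS on G^T, taking start vertices in decreasing order of finishing time.
        visited.clear()
        components = []

        def dfs_backward(u, comp):
            visited.add(u)
            comp.append(u)
            for w in transpose[u]:
                if w not in visited:
                    dfs_backward(w, comp)

        for v in reversed(order):
            if v not in visited:
                comp = []
                dfs_backward(v, comp)
                components.append(comp)

        return components

On a small input such as {'a': ['b'], 'b': ['c'], 'c': ['a', 'd'], 'd': []} this returns the two components {a, b, c} and {d}.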

This page does not render correctly in IE7

Question about the algorithm

In step 1, DFS is called on a specific vertex v.

It needs to visit every vertex in the (weakly) connected component that v belongs to, to compute its finishing time.

In a directed graph, how can we choose the start node v for DFS in such a way as to visit every node in the weakly connected component of v and not add to the total complexity of the algorithm? This does not seem obvious to me. Thanks if you can explain this.

- Panayk (talk) 16:40, 18 January 2008 (UTC)

Never mind. DFS is called on more than one starting vertex, taking care not to call it on (or revisit during a call) an already visited vertex. Perhaps we should mention that.

- Panayk (talk) 18:16, 18 January 2008 (UTC)
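
To spell out the resolution above for later readers, a minimal Python sketch (the dict-of-adjacency-lists representation is just illustrative): the first pass simply restarts DFS from every vertex that has not been visited yet, so every vertex gets a finishing time and the total work remains O(V + E), since each vertex and edge is still processed only once.

    def finishing_order(graph):
        """Vertices of a directed graph in order of increasing DFS finishing time."""
        visited, order = set(), []

        def dfs(u):
            visited.add(u)
            for w in graph.get(u, ()):
                if w not in visited:
                    dfs(w)
            order.append(u)  # record u when it finishes

        for v in graph:          # restart DFS at every still-unvisited vertex
            if v not in visited:
                dfs(v)
        return order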

Algorithm

The algorithm section seems to be lifted word-for-word from CLR... —Preceding unsigned comment added by 67.114.158.146 (talk) 08:05, 9 April 2008 (UTC)


Is the image correct?

I see that C and H are not connected, is that correct? —Preceding unsigned comment added by 128.61.23.186 (talk) 19:36, 5 June 2008 (UTC)

yes, it is ok. They don't need to be directly connected by edges. --Fioravante Patrone en (talk) 20:08, 21 August 2008 (UTC)
I think he's right, since the definition is a PATH between two points. C can get to H and vice versa over a path. It's the fact that you can go both ways that makes that group strongly connected. Delvarworld (talk) 08:22, 17 April 2011 (UTC)

Definition

The current definition (and the example) of a strongly connected component subtly avoids the delicate question of whether a singleton node without a self-loop is strongly connected or not. --94.221.120.119 (talk) 19:46, 22 January 2012 (UTC)

It does sort of avoid the question, but I think the only reasonable choice is to count a single node (without a self-loop) as strongly connected to itself, by a path of length zero. For otherwise, it would not be true that strong connectivity is an equivalence relation and that the strongly connected components form a partition of any graph's vertices. —David Eppstein (talk) 22:30, 22 January 2012 (UTC)
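
As a concrete illustration of the convention described above (a lone vertex is strongly connected to itself by a path of length zero), here is what a common graph library reports; this assumes the third-party networkx package and is only meant to illustrate the convention, not to document the article:

    import networkx as nx

    G = nx.DiGraph()
    G.add_node("v")  # a single vertex, no self-loop

    # The isolated vertex forms its own strongly connected component,
    # so the components still partition the vertex set.
    print(list(nx.strongly_connected_components(G)))  # [{'v'}]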

Wrong definition of acyclic graphs

The article says that a graph is acyclic if and only if it has no strongly connected subgraphs with more than one vertex. However, a graph consisting of a single vertex with a self-loop fulfills this condition but is not acyclic. 141.3.44.192 (talk) 08:27, 13 September 2012 (UTC)

Interesting point. Does a node on its own count as 'strongly connected'? I think yes, as there is a path (of length zero) from each node to itself. Therefore, the only subgraph that is strongly connected, but acyclic, is a single vertex without a self-loop. Maybe it should be "a graph is acyclic if and only if it has no strongly connected subgraphs with at least one edge." Aaron McDaid (talk - contribs) 09:20, 13 September 2012 (UTC)
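
A small example of the point being made in this thread, again assuming the third-party networkx package (purely as an illustration): a single vertex with a self-loop has no strongly connected subgraph with more than one vertex, yet the graph is not acyclic; the "at least one edge" formulation classifies it correctly, since the component {v} together with its self-loop is a strongly connected subgraph with one edge.

    import networkx as nx

    G = nx.DiGraph()
    G.add_edge("v", "v")  # one vertex with a self-loop

    print(nx.is_directed_acyclic_graph(G))            # False: the self-loop is a cycle
    print(list(nx.strongly_connected_components(G)))  # [{'v'}]: no component has more than one vertex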

Different (graph) condensations?

There are some other possible notions of condensation, where the type of subgraph condensed isn't a strongly connected component. For example, McCabe's algorithm for computing the essential complexity of a CFG (iteratively) condenses only the sub-graphs that have a single entry point (source) and a single exit point (sink), which are of course the structured programming control structures. Perhaps condensation isn't quite the proper term for that (it seems to also involve a sort of fixpoint iteration), but that's the term I found in a textbook (which has something like 3 editions, so it's probably not complete bunk) when I was looking for secondary sources for McCabe's paper: Paul C. Jorgensen (2002). Software Testing: A Craftsman's Approach, Second Edition (2nd ed.). CRC Press. p. 150. ISBN 978-0-8493-0809-3.

That book also defines a couple more condensations, such as along 2-components, which are used for condensing/finding DD-paths. (You can use the Google Books search function to find them inside.)

In modern compilers, the condensation (although called "reducibility", as McCabe also called it in his original paper) is actually done on a different type of sub-graph called a natural loop, defined as a "single-entry, multiple-exit loop, with only a single branch back to the entry from within it". (This is done so it fits better with languages that have break statements etc. In fact, languages like Modula-2 have been designed so that they emit natural loops when compiled.) This is standard material in most compiler textbooks; it's in the Dragon Book etc.

I'm not sure if these are worth mentioning here or if Condensation (graph theory) should perhaps be the place. Isn't there some general pattern-matching type of condensation defined somewhere? The above clearly are particularizations of a similar template, just with the sub-graphs matched changed. Also, they're all pattern matching just on sub-graphs to condense; there's no other (programming language) notion involved at the CFG level. (Unlike flow charts, CFGs don't have node types or branch labels etc.) 188.27.81.64 (talk) 08:19, 17 July 2014 (UTC)

If the word "condensation" is used in compiler graph theory to mean something different than its meaning in more theoretical branches of graph theory, and that meaning is notable enough, then we should probably have a separate article about it. Wikipedia articles are about the meanings, not about the words, so different meanings of the same word shouldn't be packed into the same article. —David Eppstein (talk) 03:03, 18 July 2014 (UTC)
After looking at more sources, I think that author (Jorgensen) is using the term condensation (outside of just SCCs) pretty idiosyncratically. I think he means something like modular decomposition, but for directed graphs. I wrote a stub on flow graph with more standard terminology; Jorgensen calls a flow graph a program graph, which is an incredibly overloaded term used by various other authors, so it took me a while to even find other references talking about the same concept... While the guys writing about flow graphs define a notion of a prime flowgraph, it seems to me it's more restrictive than modular decomposition (even ignoring orientation issues) because the decomposition of flow graphs is defined to be limited to certain operations that make sense only for flowgraphs, like nesting and sequencing. So, it probably doesn't make sense to put "condensations" (or whatever they might be called) of flowgraphs on other, more general graph pages. 188.27.81.64 (talk) 15:36, 19 July 2014 (UTC)

Short description and Definition express different things

In the short description at the top it says: "In the mathematical theory of directed graphs, a graph is said to be strongly connected if every vertex is reachable from every other vertex". This is congruent with the picture given. In the Definitions section, though, it states: "A directed graph is called strongly connected if there is a path in each direction between each pair of vertices of the graph." This is a much harder and totally different criterion for being strongly connected. Now we need a duplex connection between nodes. --BookRings (talk) 11:14, 26 July 2015 (UTC)

When it says "a path in each direction" it means two different paths. It doesn't have to be the same path! In fact, it can't possibly be the same path, because a path in a directed graph has to be consistently directed from its start towards its end. So the two definitions are the same. —David Eppstein (talk) 18:15, 26 July 2015 (UTC)
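
A tiny worked example of the point above (assuming the third-party networkx package, purely as an illustration): in a directed 3-cycle the path from a to b and the path back from b to a are necessarily different paths.

    import networkx as nx

    # Directed 3-cycle: a -> b -> c -> a
    G = nx.DiGraph([("a", "b"), ("b", "c"), ("c", "a")])

    print(nx.shortest_path(G, "a", "b"))  # ['a', 'b']
    print(nx.shortest_path(G, "b", "a"))  # ['b', 'c', 'a'] -- a different path back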

Also: Short description and Definition express different things

The short description does not require that SCCs are maximal. The definition using the equivalence on strongly connected nodes implies that SCCs are maximal. For the example: {a,b,e} is an SCC for the short description but not for the definition. — Preceding unsigned comment added by 2001:638:807:20D:FAB1:56FF:FEA6:40DF (talk) 16:09, 15 November 2019 (UTC)

The short description is both inaccurate as you say and too long. Any suggestions for how to make it both shorter and more accurate? —David Eppstein (talk) 18:39, 15 November 2019 (UTC)

Suggestion (Note that the article is about "strongly connected components" and not about the property of "strongly connected"). In the mathematical theory of directed graphs, a strongly connected component (or diconnected component) of a graph G is a maximal subgraph of G in which every vertex is reachable from every other vertex. The rest of the intro: "The strongly connected components (or diconnected components) of an arbitrary directed graph form a partition into subgraphs that are themselves strongly connected. It is possible to test the strong connectivity of a graph, or to find its strongly connected components, in linear time (that is, Θ(V+E))." seems to be covered by the article. — Preceding unsigned comment added by 2001:638:807:20D:FAB1:56FF:FEA6:40DF (talk) 10:00, 18 November 2019 (UTC)

The short description needs to be a noun phrase rather than a noun, and ideally only maybe ten syllables long. —David Eppstein (talk) 17:01, 18 November 2019 (UTC)
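
To make the maximality point in this thread concrete, here is a small hypothetical example (it assumes the third-party networkx package and is not the article's figure; the 5-cycle is just an illustration): every pair of vertices in {a, b, e} can reach each other in G, but the strongly connected component is the maximal such set, namely all five vertices.

    import networkx as nx

    # A directed 5-cycle: a -> b -> c -> d -> e -> a
    G = nx.DiGraph([("a", "b"), ("b", "c"), ("c", "d"), ("d", "e"), ("e", "a")])

    # Mutual reachability holds within {a, b, e} ...
    print(all(nx.has_path(G, u, v) for u in "abe" for v in "abe"))  # True

    # ... but the strongly connected component is maximal: it contains all five vertices.
    print(list(nx.strongly_connected_components(G)))  # one component with all five vertices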

Removed alternative name 'diconnected'

I've removed the synonym 'diconnected' from the article for the following reasons:

  1. It is not commonly used. The major references use only 'strongly connected'.
  2. It is easy to misread as 'disconnected', which has the opposite meaning to 'strongly connected', and so could cause confusion.
  3. Editors are repeatedly 'fixing' it to read 'disconnected', which means the article is just wrong.

Perokema (talk) 10:25, 5 March 2020 (UTC)