Talk:Amdahl's law

Suggestion for a simpler/more general introduction

Hi, may I suggest a simpler version of the introduction, which explains the significance of the law and also sets it in a more general context:

Amdahl's law, also known as Amdahl's argument,[1] is named after computer architect Gene Amdahl and defines a theoretical maximum for the ability of systems to scale up. "Scaling up" refers to the ability to do more work by adding more resources. The critical parameter in Amdahl's law is the percentage of work that cannot be parallelized, that is, work that must be performed by one central component and cannot be divided between resources. The basic finding of the law is that as long as a non-zero percentage of the work cannot be parallelized, the system cannot scale beyond a certain theoretical maximum. The law also shows how to compute that theoretical maximum from the serial percentage and a few other parameters.
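
For reference, the standard formulation the suggested text describes (my rendering, not part of the proposed wording): with s the fraction of work that cannot be parallelized and N the number of processors,

\[
S(N) = \frac{1}{s + \frac{1-s}{N}}, \qquad \lim_{N \to \infty} S(N) = \frac{1}{s}.
\]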

Amdahl's law is most often applied to parallel computing. In this context, the speedup of a program using multiple processors is limited by the time needed for the "sequential fraction" of the program to run; this is the portion of the work which cannot be parallelized. For example, if a program needs 20 hours using a single processor core, and a particular portion of 1 hour cannot be parallelized, while the remaining portion of 19 hours (95%) can be parallelized, then regardless of how many processors we devote to a parallelized execution of this program, the minimum execution time cannot be less than that critical 1 hour. Hence the speedup is limited to at most 20×, as the diagram illustrates.
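
A quick numeric check of the 20-hour example above (a minimal sketch; the Python code and the name amdahl_speedup are my own, not from the article):

    # Amdahl's law: speedup on n processors when a fraction s of the work
    # is strictly serial and the remainder parallelizes perfectly.
    def amdahl_speedup(s, n):
        return 1.0 / (s + (1.0 - s) / n)

    s = 1.0 / 20.0  # 1 serial hour out of 20 total hours
    for n in (1, 10, 100, 10000):
        print(n, round(amdahl_speedup(s, n), 2))
    # The speedup approaches, but never exceeds, 1/s = 20x.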

Anne.naimoli (talk) 12:27, 13 September 2011 (UTC)

References

  1. ^ Rodgers 1985, p. 226.

This would tie in well with the Critical Path Method used in project management: tracing the critical (longest) path through the task dependency graph identifies the minimum possible execution time. Please note a link to http://en.wikipedia.org/wiki/Critical_path_method — Preceding unsigned comment added by JimDelRossi (talkcontribs) 22:17, 23 January 2012 (UTC)

The explanations in this article are pretty poor. Other write-ups have errors too, but this one isn't too bad, apart from an incorrect sign in one part: http://software.intel.com/en-us/articles/amdahls-law-gustafsons-trend-and-the-performance-limits-of-parallel-applications/. If we use P for the proportion of time spent on the parallelisable part of the task, then we can write P = 1 − S, and the expression is quite easily seen to be Speedup(N) = 1/(S + P/N) − offset terms. The details then become rather obvious. The first section of this article is far too confusing and, if needed at all, should come after the basic expression shown here. If the offset terms (due to communication) are ignored and N is allowed to "proceed to infinity", it is easy to see that the maximum speedup is 1/S. As commented, the performance/price ratio deteriorates if attempts are made to use more than 1/S processors/threads/cores. I don't have time to sort this out myself, but the article really could be tidied up a lot. David Martland (talk) 09:37, 9 March 2012 (UTC)
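
To illustrate the limit described in the comment above (a sketch in the commenter's notation, with the offset terms ignored; the specific numbers are mine):

    # Speedup(N) = 1/(S + P/N) with P = 1 - S; offset terms ignored.
    S = 0.1            # serial fraction
    P = 1.0 - S        # parallelisable fraction
    for N in (1, 2, 10, 100, 1000):
        speedup = 1.0 / (S + P / N)
        print(N, round(speedup, 2), round(speedup / N, 3))
    # Speedup tends to 1/S = 10 as N grows, while speedup per processor
    # (a rough performance/price measure) keeps falling.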

Irony

This article is itself longer, has more math, and has more graphs than Amdahl's own original article. 198.123.56.223 (talk) 18:52, 12 April 2012 (UTC)

Error in graph scales

Depending on your perspective, the graph is either scaled incorrectly or the Y axis is mislabeled: it states that a single processor is twice as fast as a single processor, i.e. "speedup" and "speed multiplier" have been confused. Justin Urquhart Stewart (talk) 02:28, 29 August 2014 (UTC)

Please clarify. Each curve on the graph starts at (1,1) [and this was true even when you posted your comment]. According to Speedup, a "speedup" value of 1 represents no improvement at all. So what are you talking about? - 72.182.55.186 (talk) 00:02, 27 February 2018 (UTC)

Limitations of Amdahl's law, and ways to mitigate it using frequency scaling

It seems appropriate to discuss turbo boost (and other frequency scaling techniques) in this article, since one of their purposes is to mitigate the effects of Amdahl's law. See http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=1431565; a quote from the abstract: "In order to mitigate the effects of Amdahl's law, in this paper we make a compelling case for varying the amount of energy expended to process instructions according to the amount of available parallelism."
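
One common way this idea is modeled (an illustrative sketch only; the boost factor b and the function name are my own, not taken from the cited paper): if the serial fraction s runs with its clock boosted by a factor b while the parallel fraction runs on n cores, the speedup becomes 1/(s/b + (1-s)/n).

    # Amdahl's law with the serial phase frequency-boosted by a factor b.
    # Illustrative model only; parameter names are mine, not the paper's.
    def boosted_speedup(s, n, b=1.0):
        return 1.0 / (s / b + (1.0 - s) / n)

    s, n = 0.05, 64
    print(round(boosted_speedup(s, n), 2))         # no boost:  ~15.42
    print(round(boosted_speedup(s, n, b=1.3), 2))  # 30% boost: ~18.76
    # Boosting the serial phase raises the ceiling from 1/s toward b/s.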

I personally do not know how successful these methods currently are, as I have seen some more recent papers that show only a modest performance gain (around 5%) for applications with both limited and copious amounts of parallelism. It seems worthwhile to mention these techniques in this article, however.

In addition, Amdahl's law is often considered less useful than "work/span analysis" for analyzing parallel computations. See chapter 2 of Structured Parallel Programming (http://books.google.com/books?id=2hYqeoO8t8IC). For more information about work/span analysis, you can also see Intel's brief tutorial here: https://software.intel.com/sites/products/documentation/doclib/iss/2013/compiler/cpp-lin/GUID-83D8631F-8ECA-498A-A091-CF952FE77A24.htm
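
For context, work/span analysis bounds the running time on p processors by the greedy-scheduler bound T_p <= T_1/p + T_inf, where T_1 is the total work and T_inf is the span (critical-path length). A minimal sketch (names and numbers are my own illustration):

    # Work/span (greedy scheduler) bound: T_p <= T_1/p + T_inf,
    # where T_1 = total work and T_inf = span (critical-path length).
    def greedy_bound(work, span, p):
        return work / p + span

    work, span = 1000.0, 10.0   # parallelism = work/span = 100
    for p in (1, 10, 100, 1000):
        print(p, greedy_bound(work, span, p))
    # Speedup saturates once p approaches the parallelism work/span.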

I think that we need a separate article on work/span analysis on Wikipedia. If I get some time I'll try to write one, but if someone else in the Wikipedia world is interested in doing so, let me know and I'll be happy to lend a hand. -Tim Kaler 20:24, 15 October 2014 (UTC) — Preceding unsigned comment added by Timkaler (talkcontribs)

Inequality

Isn't there some value in pointing out that Amdahl's law functions as a limit, or at least an inequality? The speedup is really <= the expression on the right-hand side. — Preceding unsigned comment added by Jimperrywebcs (talkcontribs) 15:55, 13 November 2015 (UTC)
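
For reference, the inequality form being suggested, in standard notation with s the serial fraction (my rendering, not a quote from the article):

\[
S_{\mathrm{observed}}(N) \;\le\; \frac{1}{s + \frac{1-s}{N}} \;\le\; \frac{1}{s}
\]

The first inequality reflects overheads (communication, scheduling) that the idealized formula ignores; the second is the limit as N grows.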