Talk:Primitive recursive function


Old comments

Comments on previous changes:

  1. the initial set of X: they are axioms, not functions or terms. they are statements.
  2. successor: you had better not use + in the definition. we haven't defined addition.
  3. projection: the s suffix on words makes things plural.
  4. p.r. versus primitive recursive: I personally think it was nicer with the abbreviation, but it's not a strong opinion.

I spent a lot of time crafting this page. Please do me the courtesy of investing a similar degree of effort in your changes. --Wmorgan

Regarding number 2: I was assuming knowledge of the natural numbers and their addition. If you want to assume only Peano's axioms and start everything from there, you should link to the Peano axioms and not to "natural number line". Or do you want to start from a primitive, "geometric" understanding of the natural number line? --AxelBoldt

I think you have a point that "natural number line" does presuppose some ideas which I shouldn't assume are defined at this point. But certainly, as addition is defined later on in the page, I think it would be safest not to use it explicitly quite yet. Actually I'm a bit unclear as to whether the Peano postulates should be assumed or not. The textbook that I have on this subject (Epstein & Carnielli) uses the phrase "that number which follows n in the natural number series". Thoughts?

(BTW in rereading my comments above I come off as fairly obnoxious. That wasn't my intention, so please accept my apologies.)

And one last comment... I think I will change the definition of the set of p.r. functions to an inductive one rather than using terms like "smallest" and "closed", as upon further study, the latter, taken in the set-theoretic sense, presupposes infinite classes of functions, whereas with the former we can stay safely away from the i- word. --Wmorgan

You need at least the Peano axioms; otherwise you cannot talk about natural numbers in any meaningful way. I would probably not even bother; just mention the natural numbers, link to them, and be done with it. Then later show that the "intuitive addition" of natural numbers can be defined as a p.r. function. Everybody knows the natural numbers (or they think they do).

Also, a nice example of a p.r. function would be f(x,y) = 1 if x < y and 0 if x ≥ y. And maybe a typical function that's recursive but not p.r. --AxelBoldt
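
A minimal Python sketch of that example (my own illustration, not from the discussion above): assuming "proper" (truncated) subtraction and the sign function have already been shown primitive recursive, the comparison function is obtained by composing them.

def monus(x, y):
    # "proper" (truncated) subtraction: x - y, but never below zero
    return x - y if x > y else 0

def sgn(x):
    # sign function: 1 if x > 0, else 0
    return 1 if x > 0 else 0

def less_than(x, y):
    # f(x, y) = 1 if x < y, and 0 if x >= y
    return sgn(monus(y, x))

print(less_than(2, 5), less_than(5, 2), less_than(3, 3))  # 1 0 0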

Diagonal argument

[about the diagonal argument:] Note that this argument can be applied to any class of functions that can be enumerated in this way. (One might think this implies that any explicit list of the computable functions cannot be complete, but that's false because of the undecidability of the Halting Problem.)


I changed this back, clarifying that we only talk about total computable functions here. Indeed, an explicit (in the sense of recursively enumerable) list of total computable functions can never be complete, as the diagonal argument shows.

It is possible to have a complete explicit list of partial computable functions though. I guess that's the misunderstanding. 199.17.234.96

I don't understand what the point of this 'diagonal argument' is. To show that there are computable functions that are not primitive recursive, one just exhibits one -- such as Ackermann (as is written at the end of the article). Does anyone have an objection to snipping this bit out? --Sam Staton 15:56, 23 May 2006 (UTC)[reply]

The 'diagonal argument' is necessary. Merely exhibiting the Ackermann function isn't a proof. You must prove that it grows faster than any primitive recursive function. I suspect that proof is at least as long as the 'diagonal argument' proof. SpectatorRah (talk) 00:14, 3 April 2010 (UTC)[reply]

Gödel's incompleteness theorem

An essential piece of Gödel's theorem uses primitive recursive functions...should that be noted here?

Sentence structure

Many of the functions normally studied in number theory, and approximations to real-valued functions, are primitive recursive, such as addition, division, factorial, exponential, finding the nth prime, and so on (Brainerd and Landweber, 1974).

It hurts to read this sentence. Fredrik Johansson 00:53, 30 April 2006 (UTC)[reply]

Fixed, I think. Adam (talk) 04:02, 23 October 2008 (UTC)[reply]

Zero-based or One-based numbering?

Whoever is in charge of this article should make up his/her mind whether he/she is using zero-based numbering or one-based numbering for lists of functions and for their arguments. JRSpriggs 03:14, 30 July 2006 (UTC)[reply]

Since no one else did anything, I converted the article to consistent one-based numbering. Please let us keep it that way. JRSpriggs 03:57, 3 August 2006 (UTC)[reply]

ideas to improve the article

Hi. I made a comment on the math Wikiproject talk page that this article needed some improvement. An editor has asked me to give more precise comments so here goes.

  • My main criticism is that the page is not readable enough for the layman. Of course, this is not a really easy subject but still, I think the intro for one should say a bit more. For instance giving a quick definition of a computable function and some basic intuition as to why anyone would consider primitive recursive functions in the first place.
  • There's not enough discussion as to why these were considered.
  • point 2 of the definition should be restated in a cleaner way.
  • again, following the definition, there should be an attempt at conveying some intuition, enough so that people would accept the proof that addition is primitive recursive without involving the projections.
  • the limitations section begins by saying that primitive recursive functions correspond to our intuitive notion of computable. In the computer age, I think that's pretty much incorrect.

The article is not that bad, but it could need some help if one wants to rely on it for other articles, notably the Ackermann function. Pascal.Tesson 19:31, 18 September 2006 (UTC)[reply]

Since your criticisms are not entirely clear to me, I suggest that you try to fix them yourself. If you mess it up, I will correct you.
About defining computable functions in the introduction, I moved that stuff out of the introduction specifically to make the introduction easier to read. If you use the Turing machine definition, it is completely different from the definition of primitive recursive functions and would only confuse beginners. If you use the definition with mu-recursion, then it is even more complicated than primitive recursion. So it would also confuse people. And I wanted to get the characterization of primitive recursive functions in terms of Turing machines and the Ackermann function out of the introduction because it would look like the definition which it is not.
Use of the projections cannot be avoided without making the other stuff even worse.
Any function which can be calculated by an actual physical computer is primitive recursive on its domain, but there are many primitive recursive functions which cannot be calculated by any physical computer. JRSpriggs 07:04, 19 September 2006 (UTC)[reply]

Nonsense?

I don't know how I landed up here, but this doesn't make sense!

"The primitive recursive functions are defined using primitive recursion and composition as central operations and are a strict subset of the recursive functions (recursive functions are also known as the computable functions)."

What is primitive recursion? The article becomes denser with each word :( 122.169.148.119 (talk) 20:32, 25 November 2007 (UTC)[reply]

Sorry, but this is a difficult concept. The definition is in the first section after the lead. However, I could over-simplify it by saying that primitive recursive functions are those which can be calculated within a predetermined number of steps (which depends on inputs). JRSpriggs (talk) 03:35, 26 November 2007 (UTC)[reply]

Critique/Suggestions To Improve the Article

This article lacks concise and comprehensive definitions of what it is trying to describe. Declaring relationships or information that may be relevant to a topic without precisely defining that topic leads to the obfuscation of and abandonment of concise definition. If you feel that the material is a difficult concept, then please abandon the approach of providing strict declarative knowledge, which really says nothing precisely about a topic at all, only relationships relevant to that topic. Since this particular topic is of interest to computer scientists, a strictly mathematical approach is of good use but not sufficient. I look forward to seeing an updated page. 14:39, 9 May 2008 —Preceding unsigned comment added by Org322001 (talkcontribs)

---

The simple one-accumulator register machine might offer a (non-strict) example of the five axioms. In this example the axioms boil down to a few primitive machine operations:

1 Constant function: 0 is primitive recursive: CLEAR r, (abbreviated CLR r) where "r" is any register that the machine has access to, symbolized as 0 --> r.
Suppose we have four registers A, P, Q, R. Then CLR P will cause the contents of register P to be 0 (in other words, P will be empty of counts): 0 --> P. Registers A, Q and R will be unaffected.
2 Successor function: INCREMENT r (abbreviated: INC r)
Example: INC P will add one count to register P, the contents symbolized with brackets around the register (cf. Boolos-Burgess-Jeffrey symbolism): [P]+1 --> P
3 Projection function: i.e. we have access to any register in the machine. We need to distinguish the two types: COPY and MOVE. COPY r1,r2 i.e. copy the contents of r2 into r1. This is easier to see if we restrict our model to a machine with a special "accumulator" register that we call "A", and the instruction "LOAD r" ( LDA r) that does the following: [r] --> A.
Example: LDA P, symbolized [P] --> A
Example: LOAD A,P symbolized [P] --> A
4 Composition: STOREA r (abbreviated: STA r). COPY the contents of A -- the contents may be the result of many operations -- into any register (including itself); the contents of A are left untouched.
5 Primitive recursion: The IF ... EQUALS ... THEN GOTO ... operation. We need any two of the following 3 versions:
  • 5.1: IF the contents of register A EQUALS the contents of register r, then JUMP_TO instruction 7 (abbreviated: JE A,r or perhaps, JEA r)
  • 5.2: IF the contents of register A is not equal to the contents of register r then GOTO instruction x (abbreviated: JNE A,r,x or perhaps JNEA r,x. 7: Jump if A is Zero to instruction 7)
  • 5.3: "Unconditional jump" (JUMP_TO instruction x, appreviated J x ).

Primitive recursion actually involves incrementing too, so really, primitive recursion is an IF-THEN LOOP that counts up until the IF THEN is satisfied, at which point the function either terminates (HALT) or leaps to another instruction.

Example: ADD the contents of register P to Q and deposit the result in R
1 CLEAR A ;register A is a counter to be used in the two recursions
2 CLEAR R ;register R is where the result will appear
;First primitive recursive loop -- copies the count in register P into result-register R:
;1st primitive-recursive loop:
3 JEA P,7 ;Is (contents of) A equal to (contents of) P? If so then go to instruction 7 else continue to next instruction.
4 INC R
5 INC A
6 JUMP_TO 3
7 CLEAR A ;A will again be used as a counter to add the counts in Q to the counts in R
;2nd primitive-recursive loop:
8 JEA Q,12 ;Is A equal to Q? If so then go to instruction 12 else continue to next instruction.
9 INC R
10 INC A
11 JUMP_TO 8
12 HALT ; [R]=[P]+[Q]

The above is only one version of ADD -- maybe the reader can suggest a better one -- but it illustrates the point. I don't know whether something like this would help a reader or not. One thing that comes out of it is that we know from Turing equivalence that other "instruction sets" are sufficient. Indeed, there are other sufficient axiom sets besides the "Peano set" given in this article. Who is it ... Laszlo Kalmar (1943) it was ... who originated another set similar to the counter machine (i.e. his set starts with constant 1, has INCREMENT, a subtraction (DECREMENT), and the finite sum and finite product -- these are where recursion appears; cf Kleene 1952/1971:285 and p. 526 citing Kalmar 1943). Bill Wvbailey (talk) 16:56, 9 May 2008 (UTC), slight emendation Wvbailey (talk) 18:31, 31 May 2008 (UTC)[reply]
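
For anyone who wants to run the two-loop ADD program above, here is a rough Python transcription (the register names and structure follow the listing; the Python while loops merely simulate the conditional jumps and are not part of the counter-machine model itself):

def add_via_counter_machine(p, q):
    A, P, Q, R = 0, p, q, 0      # 1 CLEAR A, 2 CLEAR R (P and Q hold the inputs)
    while A != P:                # 3 JEA P,7 -- exit when [A] equals [P]
        R += 1                   # 4 INC R
        A += 1                   # 5 INC A
                                 # 6 JUMP_TO 3
    A = 0                        # 7 CLEAR A
    while A != Q:                # 8 JEA Q,12 -- exit when [A] equals [Q]
        R += 1                   # 9 INC R
        A += 1                   # 10 INC A
                                 # 11 JUMP_TO 8
    return R                     # 12 HALT; [R] = [P] + [Q]

print(add_via_counter_machine(3, 4))  # 7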

importance

I noticed the new text on importance in the lede. While being r.e. is a nice property of the primitive recursive functions, I am not sure it is the key reason for their importance. Things that seem more important to me include:

  • Most of the effective procedures that arise in elementary proof theory are primitive recursive. For example, the functions in Goedel's incompleteness theorem are all primitive recursive
  • Primitive recursive arithmetic (PRA) is widely regarded as an embodiment of Hilbert's finitism
  • The primitive recursive functions are easy to prove total, unlike computable functions in general. In particular, they are exactly the provably total functions of RCA0.

These things ought to be in the article, actually. Unfortunately, I don't have in mind any sources that explicitly try to explain why the primitive recursive functions are important. — Carl (CBM · talk) 01:33, 23 August 2009 (UTC)[reply]

Carl, you list three reasons. I think your first reason is certainly a nice property of PR functions, but does not distinguish them from general recursive functions. I would be interested in seeing references on your second point; perhaps the reason for this regard might be valid as a reason why the primitive recursive functions are notable. Your last point ought to be added to the reasons for notability (and I intend to do so). AshtonBenson (talk) 00:57, 28 August 2009 (UTC)[reply]
Hrm, on second thought, I have concerns about your third point. The reason why it's easy to prove that they are total is that they are inductively generated (the totality proof proceeds by induction on the enumeration of all PR functions). So the fact that they can be proven total is important, but the reason why this is the case is because they -- as a class -- are RE. That's the "easy" you mention. AshtonBenson (talk) 00:59, 28 August 2009 (UTC)[reply]
The class of all partial recursive functions is also r.e., as is the class of elementary functions, so it is not clear to me that this is really a unique property that makes the primitive recursive functions interesting. Can you say who makes this claim in print? — Carl (CBM · talk) 01:23, 28 August 2009 (UTC)[reply]
The article says "subset of the set of all total recursive functions" -- the class of partial recursive functions does not meet this criteria. What makes PR notable is that it it is the strongest "common" system with all the properties mentioned (and this is what distinguishes it from the elementary functions). Now what I'm really curious to know is why, say, the primitive recursive functions plus a symbol for the Ackermann function, and closed under composition, wasn't chosen instead. I have no answer to that question. AshtonBenson (talk) 03:57, 28 August 2009 (UTC)[reply]
P.S. The reason that the primitive recursive functions are easy to prove total is that all the induction needed to do so in arithmetic is included in the scheme IΣ1, which consists of the first-order induction scheme limited to Σ1 formulas. The collection of all functions primitive recursive in the Ackermann function is also r.e. and contains only total functions, but the Ackermann function itself is not provably total in RCA0. So the mere fact that the primitive recursive functions are r.e. is not the key point. — Carl (CBM · talk) 01:36, 28 August 2009 (UTC)[reply]
If there were a name for that class of functions, and if that class had been discovered at the same time as the PR functions (or very shortly thereafter), I bet that class would be notable instead. AshtonBenson (talk) 04:01, 28 August 2009 (UTC)[reply]
The elementary functions are also perfectly notable, although we don't appear to have an article on them. As are the partial recursive functions. — Carl (CBM · talk) 10:39, 28 August 2009 (UTC)[reply]
See ELEMENTARY. What an awful link. I found this via Laszlo Kalmar. Bill Wvbailey (talk) 16:35, 28 August 2009 (UTC)[reply]
Boolos Burgess Jeffrey 2002 say that "Another process, called (primitive) recursion, is what is involved in defining multiplication as repeated addition, exponentiation as repeated multiplication, and so on." (p. 58); we can't get what we need to do arithmetic as we know it with just "zero", "successor", "identity" and "composition". Kleene 1952:217 introduces his Chapter IX "Primitive Recursive Functions" with "To establish the lemma for Goedel's theorem, we shall develop an intuitive theory about a certain class of number-theoretic functions and predicate [etc]...". Minsky implies the machines associated with PR are "not quite so complicated" [as a Turing machine] so explorations of undecidability might be easier (p. 116). That's about all I've got. Bill Wvbailey (talk) 02:02, 23 August 2009 (UTC)[reply]
Bill, thanks for the nice citations in which people mention PR functions, but I don't really think any of those speak to their notability (while they certainly do describe the PR functions!) AshtonBenson (talk) 00:59, 28 August 2009 (UTC)[reply]

Re Ashton, PRA: the identification of PRA with finitism is a long-running discussion in the philosophy of mathematics. Everyone I have read agrees that PRA is included in "finitism"; the point of contention is whether finitism goes beyond PRA or not, with some on each side. It is trivial to find papers where this is discussed using google. One influential paper is:

  • "Remarks on finitism", Wiliam Tait, [1].

Some other works that discuss the issue are:

  • "Partial realizations of Hilbert's program", Stephen Simpson, [2]
Er, you know that this paper merely cites the one above and then moves on, right? It doesn't really add anything on this topic. AshtonBenson (talk) 04:23, 28 August 2009 (UTC)[reply]
  • "Finitistic Properties of High Complexity", Dmytro Taranovsky, [3]
  • "Finitism and intuitive knowledge", Charles Parsons [4]

Here is a quote from Simpson on the FOM email list in 1999 [5]:

"Tait in his 1981 paper argued that Hilbert's finitism is formalized by PRA. This conclusion is widely accepted in the f.o.m. literature. I certainly accept it..."

There are many more references than this; the relationship between PRA and finitism has been thoroughly explored in the literature. — Carl (CBM · talk) 01:13, 28 August 2009 (UTC)[reply]

This merely cites the same paper you cite above by Tait. There's really no additional content there. And, as a side note, I don't know if I'd count the FOMlist as a reliable source ;) AshtonBenson (talk) 04:09, 28 August 2009 (UTC)[reply]
That leaves the actual papers cited above. There are many more authors who discuss the relationship between PRA and finitism, I just included a few references to make the point. — Carl (CBM · talk) 10:39, 28 August 2009 (UTC)[reply]


I guess I don't understand AshtonBenson's notion of "notable", or "important".
Please see WP:NOTABILITY
The following is certainly historically important. So I'll offer it up:
Hilbert viewed "recursion" ("primitive", the only kind known at the time he gave his 1928) as the keystone of his formal, axiomatic theory of arithmetic. Hilbert (1926) had conjectured that there might be functions that could not be generated by [primitive] recursion, but Ackermann had not yet presented his (1928) novel "double-recursion", nor had Péter (1935) [reference: Kleene 1952:271]. So, Hilbert, in his 1927 The foundations of mathematics (van Heijenoort pp. 464ff) had only primitive recursion (based on Peano's successor and induction) at his disposal. He says "Finally, we also need explicit definitions, . . . as well as certain recursion axioms, which result from a general recursion schema. . . . For in my theory contentual inference is replaced by manipulation of signs according to rules; in this way the axiomatic method attains that reliability and perfection that it can and must reach if it is to become the basic instrument of all theoretical research."(boldface added, p. 467) He goes on about a page later to define what he means by "recursion" (nothing surprising -- the previous calculation is employed in generating the next calculation see p. 468), and after that, he states "In a corresponding way, the recursion axioms are formula systems that are modeled upon the recursive procedure. ¶ These are the general foundations of my theory. To familiarize you with the way in which it is applied I would like to adduce some examples of particular functions as they are defined by recursion." (p. 469) Here he offers examples of simple recursions starting with ι(0)=0; ι(a')=1, [etc].
I think this strongly supports my point. Had Ackermann's function been known at the time, we probably would have wound up with some class of functions which includes it (and is closed under composition) as our most-notable class of "obviously" terminating functions (termination being obvious because the termination proof can proceed by induction on the recursive procedure for enumerating all the functions in the class). And in fact I bet we'd call them "the primitive recursive functions." Point being, that name and the notability that goes with it was given to the first-to-be-discovered "large" subset of the total recursive functions which was itself RE. AshtonBenson (talk) 04:09, 28 August 2009 (UTC)[reply]
That viewpoint is somewhat ahistorical. Gödel introduced the primitive recursive functions, under the name "recursive", before what we now call computable functions had been developed at all. Later the name "recursive function" became a synonym for "general recursive function" and "primitive recursive function" was used to distinguish that class, which is distinguished from the general recursive functions by having a recursion scheme that is limited in a certain way. Monk (Mathematical logic, p. 34) says, "We feel that it is only an historical accident that elementary functions are not more widely discussed than primitive recursive functions." — Carl (CBM · talk) 12:10, 28 August 2009 (UTC)[reply]
To sum up, "primitive" recursion provided "the general foundations" of Hilbert's theory that was to be used, in essence, by Goedel a few years later. "Full" recursion was not the driver, "primitive" recursion was. Kleene notes that the formal system he (Kleene) presents (essentially that used by Goedel 1931) "has function symbols only for the three functions ', +, *. ... This proof [Goedel's "Theorem VII: Every [primitive] recursive relation is arithmetic[al]"] is not essential to our program of formalizing number theory. If it did not succeed, we could have arranged instead that recursion equations for other functions besides + and * should be axioms of the system. ... However, it is of some interest that a finite system suffices, the more so that we can get along with the two chief functions + and * of traditional arithmetic, when taken with the logical constants and the predicate =." (boldface added, Kleene 1952:239). Bill Wvbailey (talk) 02:53, 28 August 2009 (UTC)[reply]

Ashton: could you please let everyone know exactly which source you are looking at that explicitly says that the reason the primitive recursive functions is important is that they are r.e.? As I said, I have looked through several books, and what they all say is just that the primitive recursive functions include most of the functions commonly encountered in mathematics. — Carl (CBM · talk) 10:39, 28 August 2009 (UTC)[reply]

Also, I agree with Wvbailey that you seem to be misinterpreting the word "notable". According to WP:N, something is notable exactly when it is discussed by secondary sources. — Carl (CBM · talk) 10:40, 28 August 2009 (UTC)[reply]

---

Somewhat technical history of the development of the notion of "recursion" (as opposed to primitive recursion): In a paper dated 14 July 1931 (published 1932) Herbrand proposed an expanded notion of "recursion"; van Heijenoort notes that, "The functions [Herbrand proposes] are, in fact, (general) recursive functions, and here is the first appearance of the notion of recursive (as opposed to primitive recursive) function" (van Heijenoort's commentary before Herbrand 1931b On the consistency of arithmetic, 1965:619). In a note dated 22 January 1931 (published 1932) On Completeness and Consistency Goedel begins with "Let Z be the formal system that we obtain by supplementing the Peano axioms with the schema of definition by recursion (on one variable) and the logical rules of the restricted functional calculus" (boldface added, van Heijenoort 1967:616). Goedel (1931) is thus aware of Ackermann (1928).
Then in spring 1934 at his IAS Princeton lectures (Davis 1965:41-71 + Goedel's 1964 Postscriptum 1965:71-73) Goedel states that "Recursive functions have the important property that, for each given set of values of the arguments, the value of the function can be computed by a finite procedure"; he goes on to postulate, in footnote #3, that the converse is true if we admit "recursions of other forms . . . this cannot be proved, since the notion of finite computation is not defined, but it serves as a heuristic principle." (Davis 1965:44). Goedel then tacks on a sub-lecture "9. General recursive functions" (Davis 1965:69-71), giving an example of a function not recursive in his restricted sense of [primitive] recursion, and noting that he needs to "generalize our β function". (In footnote 34 he mentions that this notion is attributable to a private communication between him and Herbrand. But later, in ca 1960's communications with Davis, he observes problems with Herbrand's shifting definition that are really technical (meaning: I don't understand them, cf curly-braces footnote #34 in Davis 1965:70)). Goedel goes on to discuss Herbrand's notion, restrict it a bit, and then propose this as an expanded definition of "recursion"; thus ends his 1934 lectures.
All of this has to do with Hilbert's Entscheidungsproblem and the attempt to arrive at a suitable definition of computability; by 1936 Church will present "an unsolvable problem of elementary number theory", and Turing appears in 1936-7. Indeed, in his commentary before the 1934 lectures Davis notes that Goedel stated (in a letter to him) that "he was, at the time of these lectures, not at all convinced that his concept of recursion comprised all possible recursions" (Davis commentary in Davis 1965:40). Indeed, in both Davis and van Heijenoort, we observe that by 1963-4 Goedel had abandoned recursion altogether and proposed Turing's "general notion of formal system" in its place (Davis 1965:71-73, van Heijenoort 1967:616); Goedel's take on lambda-calculus was that it was "much less suitable for our purpose" (footnote * in Davis 1965:72). Bill Wvbailey (talk) 16:25, 28 August 2009 (UTC)[reply]

The historical reason why primitive recursive functions came to preeminence is that for a while (in the 1920s) they were believed to be sufficient to express computation. Ackermann's function was actually developed as a counterexample to that. At least Scott Aaronson says so. Of course, they were believed to be sufficient for computation purposes precisely because all/most functions used in math were/are of that kind. Pcap ping 12:29, 18 September 2009 (UTC)[reply]

Aaronson doesn’t say who those mathematicians were who “thought that ‘computable’ coincided with a technical notion called ‘primitive recursive’”, so we cannot determine if this is a true statement. Here are two prominent counterexamples. As noted above, for purposes of his 1931 Goedel “invented” what we think of nowadays as “the primitive recursion schema”, and Goedel admitted he didn’t think they were sufficient (cf Davis’s commentary in Davis 1965:40). Hilbert also included “recursion” in his formalism. But was it sufficient? As is discussed in Kleene 1952: "Are there recursions which are not reducible to primitive recursion, and in particular can recursion be used to define a function which is not primitive recursive? This question arose from a conjecture of Hilbert 1926 on the continuum problem, and was answered by Ackermann 1928". For a lot of historical detail after Ackermann's discovery see Kleene's §55. General recursive functions in Kleene 1952:270-276. Kleene considers the primitive recursive schema of his §43 (it’s the one you find in Rogers 1987 on page 6) nothing but definition by mathematical induction (cf top of page 274). Bill Wvbailey (talk) 14:34, 18 September 2009 (UTC)[reply]

Kronecker?

Didn't Kronecker invent these? People keep saying it's Godel. Likebox (talk) 16:01, 31 December 2009 (UTC)[reply]

WP:CS 71.139.12.24 (talk) 16:37, 31 December 2009 (UTC)[reply]
It's not clear who exactly first invented/discovered/described "recursion" as in: using a previous calculation in a subsequent calculation in the manner of mathematical induction. It appears in the late 1800's in Peano's axiom-based, "recursive" definitions of addition and multiplication. So maybe Kronecker had something to do with this "reuse" notion (as in: formal inductive proof), but I suspect this goes back to the ancient Greeks. I believe Goedel was the first to adopt/describe (primitive) recursion formally in his "incompleteness" proofs, but the general notion can be found in Hilbert's lectures of a few years before. Subsequently (after the problem of the Ackermann function appeared), Goedel was responsible for describing "general" or "full recursion" in his Princeton lectures, but this came from a suggestion by Herbrand who was working with Hilbert. With respect to a theory that would allow calculating anything calculable, throughout his life Goedel had no use for lambda calculus and remained suspicious of recursion in any form ("primitive" or "general" i.e. with the added mu-operator) and eventually rejected the idea in favor of Turing-like mechanical processes. BillWvbailey (talk) 16:58, 31 December 2009 (UTC)[reply]
Thanks--- I understand. But I think there was a precise moment at which primitive recursion first appears in the literature, and I think it is in a work of Kronecker in the late 1890s. I might be wrong--- I vaguely recall this from years ago. It might have been Peano, or some other of the early logicians. I think that Godel does not attribute his formal definitions of primitive recursion in his 1931 paper because it is common knowledge, not because it is new. I don't want to screw up the priority. Likebox (talk) 17:15, 31 December 2009 (UTC)[reply]
I did some research by querying both my cc's derived from googlebooks (I've OCR-converted them to searchable text), and then querying googlebooks directly. Dedekind in his (1903) Essays on the Theory of Numbers uses the word on p. 33 in a 1901 translation: ". . .definition by induction (or recursion)"; Peano cited Dedekind but also a host of logicians. I did not find it in my OCR'd cc's of Jevons, Peirce/Ladd-Franklin, Boole, Cantor or Couturat. But via the query with googlebooks I did find it in (i) J. L. Raabe's (1857) Mathematische Mittheilungen, (ii) Adam Ritter von Burg's (1851, 1836?) Compendium der hoeheren Mathematik, and in (iii) Kaiserl's 1848 Sitzungsberichte der Mathematisch-Naturwissenschaftlichen Classe. I can't read German but the context where the word is used is clearly one of iteration of formulas. There's a Latin usage in an 1899 edition of Pope Sylvester II's (972-1003) Opera Mathematica where the word "recursione" appears to be used in the context of the repeating seasons of the year: " . . . qualiter naturali alternatione et annus recursione nunc jovianis orbem aspirat temperametis . . .". So at least we know where that word came from originally.
The word "primitive" was tacked on after Goedel and Herbrand came up with their more-general recursion; Goedel in his incompleteness proofs used the word "recursiv". If I can trace the usage back further than Kaiserl 1848 and von Burg 1851/1836? I'll add it here(is von Burg a 2nd edition? Unclear since I can't read the introductions as to what is going on here). [Leopold Kronecker (December 7, 1823 – December 29, 1891)]. Bill Wvbailey (talk) 23:26, 31 December 2009 (UTC)[reply]
The word is also spelled (in German) "rekursion", and it has the usage "rekursiv". But google books doesn't produce any earlier results than those listed above. I noticed that the word was used in reference to integration (e.g. as in accumulating finite elements) and this makes me wonder if something could be traced back to e.g. Newton. Bill Wvbailey (talk) 16:56, 1 January 2010 (UTC)[reply]
From the book: Charles Davies, William Guy Peck, 1859, Mathematical dictionary and cyclopedia of mathematical science, A. S. BARNES & BURR, 51 & 53 JOHN STREET, NEW YORK
"There is a mathematical process of demonstration which possesses somewhat the character of induction, inasmuch as a general truth is gathered from the examination of particular cases, but it differs from it inasmuch as each successive case is made to depend upon the preceding one. This process has been called the process of successive induction. (p. 310)
Bill Wvbailey (talk) 17:54, 1 January 2010 (UTC)[reply]
from: Florian Cajori (1893, 1919) "A History of Mathematics", The Macmillan Company, London. Note the use of the word "recurrent" that I have boldfaced [the following was derived from OCR'd text and has not been double-checked for italics etc]:
“To Maurolycus has been ascribed also the discovery of the inference by mathematical induction. It occurs in his introduction to his Opuscula mathematica, Venice, 1575. Later, mathematical induction was used by Pascal in his Traité du triangle arithmétique (1662). Processes akin to mathematical induction, some of which would yield the modern mathematical induction by introducing some slight change in the mode of presentation or in the point of view, were given before Maurolycus. Giovanni Campano (latinized form, Campanus) of Novara in Italy, in his edition of Euclid (1260), proves the irrationality of the golden section by a recurrent mode of inference resulting in a reductio ad absurdum. But he does not descend by a regular progression from n to n-1, n-2, etc., but leaps irregularly over, perhaps, several integers. Campano's process was used later by Fermat. A recurrent mode of inference is found in Bhaskara's "cyclic method" of solving indeterminate equations, in Theon of Smyrna (about 130 A. D.) and in Proclus's process for finding numbers representing the sides and diagonals of squares; it is found in Euclid's proof (Elements IX, 20) that the number of primes is infinite.” (p. 142)
"In [De Morgan’s] article "Induction (Mathematics)," first printed in 1838, occurs, apparently for the first time, the name "mathematical induction"; it was adopted and popularized by I. Todhunter, in his Algebra. The term "induction" had been used by John Wallis in 1656, in his Arithmetica infinitorum; he used the "induction" known to natural science. In 1686 Jacob Bernoulli criticLed him for using a process which was not binding logically and then advanced in place of it the proof from n to n+1. This is one of the several origins of the process of mathematical induction. From Wallis to De Morgan, the term "induction" was used occasionally in mathematics, and in a double sense, (I) to indicate incomplete inductions of the kind known in natural science, (2) for the proof from n to n+ 1. De Morgan's "mathematical induction" assigns a distinct name for the latter process. The Germans employ more commonly the name "vollstandige Induktion," which became current among them after the use of it by R. Dedekind in his Was sind und was solten die ZaJUen, 1887.” (p. 331)
I think this should do it, from the historical perspective. Bill Wvbailey (talk) 18:23, 1 January 2010 (UTC)[reply]

Primitive recursive, recursive, and computable

The term "recursive function" is defined as "total computable function", so the class of recursive functions is not identical to the class of computable functions (the class of [μ-recursive functions is, however). Furthermore, there is nothing surprising about the fact that the primitive recursive functions are a strict subset of the computable functions, since all p.r. functions are total and the class of computable functions contains at least one function which is not total. That is the reason why I changed the introduction to express more clearly that the class of p.r. functions does not constitute all recursive functions.

Also, please don't revert a complete edit if you only disagree with a small part of it. Thanks. – Adrianwn (talk) 05:05, 13 July 2010 (UTC)[reply]

Reversion is proper protocol. It forces the discussion to the talk page. Until this is hammered out here, I will revert your edit. BillWvbailey (talk) 15:19, 13 July 2010 (UTC)[reply]
My objection was not against a revert in general. You reverted an edit of mine which contained more than just the part which you disagree with; accordingly you should have only reverted that part of it, not everything. Let me put emphasis on my sentence above: please don't revert a complete edit if you only disagree with a small part of it. (edit in question) – Adrianwn (talk) 16:20, 13 July 2010 (UTC)[reply]
As I pointed out on a different talk page, "recursive function" and "computable function" are complete synonyms in computability theory. There is no difference in meaning. — Carl (CBM · talk) 10:51, 13 July 2010 (UTC)[reply]
Carl, do you agree with the edit? I don't. I agree with what you wrote above: "(u)-recursive function" is identical to "computable function". This is my understanding of the words:
  • "5.8 Theorem. All recursive functions are abacus computable (and hence Turing computable)" (Boolos-Burgess-Jeffrey 2002:61).
The adjective "Total" restricts the class of computable (aka recursive) functions to those that terminate for all integers in their defined domain; if they are to terminate, partial functions require restricted domains (cf B-B-J:7), e.g. for termination the (general, or u-)recursive function "division" disallows "division by 0", over the integers unrestricted subtraction p - q fails when q > p cf B-B-J:69 problem 7. So to make "subtraction" total, and coincidentally primitive recursive, it must be modified to become "proper" subtraction, etc.
  • Primitive recursive functions lack the u-operator. They are a "subclass" of recursive functions (B-B-J:63).
  • "The class of recursive functions is identical to the class of partial recursive functions" (cf Hopcrof-Ulmann 1979:175 problem 7.6.)
  • "Every primitive recursive function is a total recursive function" (cf Hopcroft-Ulmann 1979:175 problem 7.7 a).
  • "Adding the minimization operator [to the primitive recursion operators]. . . yields all partial recursive functions." (cf Hopcrof-Ulmann 1979:175 problem 7.6.)(cf Hopcrof-Ulmann 1979:175 problem 7.7.c)

BillWvbailey (talk) 15:19, 13 July 2010 (UTC)[reply]

They did not give the word "computable" a definite meaning in my schooling, so I would prefer that we restrict ourselves to "partial recursive", "total recursive", and "primitive recursive". "Recursive" alone is subject to confusion as to which of these it means.
To Adrianwn: If you do not want your whole edit reverted (and there are some good things in it), then I suggest that you break it down into parts, putting the non-controversial changes in one edit and the ones likely to be reverted into another. JRSpriggs (talk) 16:50, 13 July 2010 (UTC)[reply]
There is a similar discussion on the talk page of Ackermann function. It turns out that "recursive" is not a universally defined term. I never disputed that "computable" is identical to "µ-recursive"; whether "recursive" is identical to "computable" or to "total computable" is a matter of taste and depends on the author (I can give you several citations). For this reason, I agree with JRSpriggs that it should be disambiguated, for example: "total µ-recursive function".
As I explained above, the interesting thing about primitive recursive functions is not that they are a strict subset of all computable functions (which is trivial since these include non-total functions), but that they are a strict subset of the total computable functions, which I wanted to express with my edit.
JRSpriggs, although I agree with you regarding the size of edits in general, I don't like it when the history is cluttered with a myriad of small changes, and at the time I did not think my edits would be that controversial. – Adrianwn (talk) 19:54, 13 July 2010 (UTC)[reply]
Am confused. At least since Kleene 1952 and then later in his 1967 the word has been well-defined. Kleene 1967 starts here:
"Similarly, if there is a computation procedure for a function, we call the function computable" (p. 228)
where he has previously defined "computation procedure" as follows:
"...just as we may have a decision procedure or algorithm for a countably infinite class of questions each calling for a "yes" or "no" answer, we may have a computation procedure or algorithm for a countably infinite class of questions which require as answer the exhibiting of some object" (p. 226)
Kleene admits that this "intuitive notion of computation procedure . . . is vague" (p. 231) but then he asserts that the notion has been presented in "exact terms" -- "Most mathematical logicians agree that it was found in 1935 (and published in 1936), as we shall see in the next section [§41. Turing machines, Church's Thesis]" (p. 231-232).
Over the years, a number of authors have shown the equivalence of (general or u-)recursion to Turing machines (and Post-Turing machines and Register machines and abacus machines (aka counter machines)). My favorite is B-B-J 2002:
"In the preceding several chapters we have introduced the intuitive notion of effective computability, and studied three reigorously defined technical notions of computability: Turing computabilitym, abacus computability, and recursive computability, noting along the way that any funtion that is computable in any of these technical senses is computable in the intuitive sense. We have also proved that all recursive functions are abacus computable and that all abacus computable functions are Turing computable. In this chapter we show that all Turing-computable functions are recursive, so that all three notions of computability are equivalent" (p. 88, introduction to Equivalent Defintions of Computability")
Minsky 1967 does something similar. Kleene did it way back in 1952. My engineering-grad text by Stone uses the Turing machine as the fundamental model of computation. In the Index of Enderton 2nd edition 2002 "Computable Functions, ..., see also Recursive functions" (cf pages 208-209 in particular where he formally defines "recursive"). Etc. BillWvbailey (talk) 20:24, 13 July 2010 (UTC)[reply]
I only knew "recursive functions" as "total µ-recursive functions"; it is only now that I learned that others might use a different definition. See also recursive set, which has a total computable characteristic function. – Adrianwn (talk) 21:41, 13 July 2010 (UTC)[reply]

[unindent] What do you think about the following version of the first paragraph:

"The primitive recursive functions are defined using primitive recursion and composition as central operations and are a strict subset of the total µ-recursive functions (µ-recursive functions are also called partial recursive). The term was coined by Rózsa Péter."

This way,

  • there is no confusion over whether "recursive" implies "partial" or "total" (the exact meaning depends on the author)
  • it is emphasized that the p.r. functions do not comprise all total (computable) functions
  • the close relationship between p.r. and µ-recursive is stressed

Adrianwn (talk) 15:55, 19 July 2010 (UTC)[reply]

If no one objects, I will make those changes in a couple of days. – Adrianwn (talk) 07:13, 22 July 2010 (UTC)[reply]

Definition

This article suffers from the same exasperating flaw just about every mathematics wikipedia article has. The authors seem incapable of understanding what a definition is! For Pete's sake, give a definition! These articles are incomprehensible unless you already know the material. Our most eminent physicist, Richard Feynman, once famously said that "if you can't explain something such that a freshman can understand it, it's because you don't understand it yourself." It appears that most of our wikipedia mathematics contributors are really clever crunching numbers, but they don't really understand the fundamental meaning of what they are doing. If they did, they could explain it! — Preceding unsigned comment added by 98.170.193.226 (talk) 02:33, 25 June 2012 (UTC)[reply]

There is a definition given. If you have a problem with it, then say what the problem is! Non-specific complaints are not helpful. JRSpriggs (talk) 05:34, 25 June 2012 (UTC)[reply]

The problem is that the article is laid out in a very confusing way, and uses unnecessarily complex language (perhaps in an attempt to be both precise and concise at the same time, but it ends up being very confusing for the uninitiated). I have a few suggestions:

The opening paragraph provides a definition whose meaning is unclear because it relies on terms that are seemingly circular (i.e. that a primitive recursive function is defined using primitive recursion is probably something that I could have guessed to start with, but given that at this point I have no idea what exactly primitive recursion is, it isn't exactly helpful). I would combine the first two paragraphs as follows:

In computability theory, the primitive recursive functions are a class of functions (a strict subset of the total µ-recursive functions or partial recursive functions) that form an important building block on the way to a full formalization of computability and are important in proof theory. They are defined using a short list of ways of composing functions and a particular form of recursion known as primitive recursion. The term was coined by Rózsa Péter.

This has several advantages:

  • Follows WP:MOS by introducing the field before defining the concept.
  • Provides a simple idea of the kind of thing they are before going on to provide more detail
  • Use of bold indicates that primitive recursion will be defined in the article (rather than just mentioning it which might make a reader assume they are expected to know what it is already)

Next, I would change the definition section. The use of the term n-ary function can be confusing, particularly where n is replaced with a complex expression; I would drop it and replace it with the phrase a function accepting n arguments. I would also drop some of the more esoteric language that is not necessary to understanding the definition and use common explanations (perhaps introducing the phrase in brackets afterwards, if it is a particularly important one or will be used again in the rest of the article), so something like:

The primitive recursive functions are functions which take some number (potentially zero) of natural numbers (nonnegative integers) {0, 1, 2, ...} as arguments and produce natural numbers as their results.
The basic primitive recursive functions are given by these axioms:
  1. Constant function: The constant function 0 is primitive recursive.
  2. Successor function: The single-argument successor function S, which returns the successor of its argument (see Peano postulates), is primitive recursive. That is, S(k) = k + 1.
  3. Projection function: For every n≥1 and each i with 1≤i≤n, the projection function P_i^n, which takes n arguments and returns its i-th argument, is primitive recursive.

and so on. I'm not sure if the reference to the Peano postulates should stay or not -- it is not essential to understanding the subject, but is not particularly harmful, either. I hope you see that this is a clearer way of phrasing the definition, which while not as brief as the original avoids the use of notations or terminology the reader may be unfamiliar with and is therefore easier for a novice to read. JulesH (talk) 09:13, 2 September 2013 (UTC)[reply]
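
To make the proposed wording concrete, here is an illustrative Python sketch (my own, with made-up helper names, not part of any proposed article text) of the basic functions together with composition and the primitive recursion scheme, used to build addition from zero and successor:

def zero():
    # the constant function 0
    return 0

def succ(k):
    # the successor function S(k) = k + 1
    return k + 1

def proj(i, n):
    # the projection function P_i^n: returns the i-th of its n arguments (1-based)
    return lambda *args: args[i - 1]

def compose(f, *gs):
    # composition: h(x1,...,xn) = f(g1(x1,...,xn), ..., gm(x1,...,xn))
    return lambda *args: f(*(g(*args) for g in gs))

def prim_rec(f, g):
    # primitive recursion (one common argument ordering):
    #   h(x1,...,xk, 0)     = f(x1,...,xk)
    #   h(x1,...,xk, y + 1) = g(x1,...,xk, y, h(x1,...,xk, y))
    def h(*args):
        *xs, y = args
        value = f(*xs)
        for i in range(y):
            value = g(*xs, i, value)
        return value
    return h

# addition: add(x, 0) = x and add(x, y + 1) = S(add(x, y))
add = prim_rec(proj(1, 1), compose(succ, proj(3, 3)))
print(add(3, 4))  # 7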

I agree that your suggestion is much better than the existing article. But, in my opinion, the first sentence is yet too technical. I would suggest:
"In computability theory, the primitive recursive functions are, roughly speaking, the functions that may be defined and computed by a computer program made up from the arithmetic operations, the if then else control statement and the for i = 1 to n loop (excluding the while loop with a number of iterations that are is not known when entering in the loop). The primitive recursive functions form an important building block on the way to a full formalization of computability and are important in proof theory. Historically, and for allowing simpler proofs, they are formally defined using a short list of ways of composing functions and a particular form of recursion known as primitive recursion. The term was coined by Rózsa Péter."
As almost everybody knows the basics of computer programming, this would make the first paragraph understandable to a much wider audience. This characterization of the primitive recursive functions also has the advantage of allowing a non-specialist to recognize primitive recursive functions by himself. D.Lazard (talk) 17:02, 2 September 2013 (UTC)[reply]
If you want to appeal to the man in the street, you could explain primitive recursive as LOOP-programs, and general recursive as WHILE-programs. But I guess computability theory, also known as recursion theory, hasn't taken this road. An early mention of primitive recursion was in Dedekind 1888.
Jan Burse (talk) 13:05, 28 February 2024 (UTC)[reply]
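
A small Python illustration of the LOOP-program idea (function names are my own): every loop below is a bounded for loop whose number of iterations is fixed before the loop is entered, and there are no while loops, which is exactly the LOOP restriction being contrasted with WHILE-programs.

def add(x, y):
    for _ in range(y):        # the bound y is known before the loop starts
        x = x + 1
    return x

def mul(x, y):
    result = 0
    for _ in range(y):
        result = add(result, x)
    return result

def factorial(n):
    result = 1
    for i in range(n):
        result = mul(result, i + 1)
    return result

print(factorial(5))  # 120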

If either of you make any changes to the article, please make them incrementally: make a small (localized) change, wait a few days to see what the reaction is, make another small change, wait a few more days, etc.. This way I or others can correct any errors you introduce instead of having to revert you wholesale. JRSpriggs (talk) 02:46, 3 September 2013 (UTC)[reply]

symbol for proper subtraction (#Some common primitive recursive functions)

I'm no expert here, but shouldn't the symbol for proper subtraction be ⊥ (U+22A5, 'Up tack') or ⟂ (U+27C2, 'perpendicular'), not ┴ (U+2534, 'Box drawing light up and horizontal', currently used)? I don't expect box drawings to be used as a mathematical expression... -- あるうぃんす (talk) 14:45, 14 September 2013 (UTC) (sorry about my username!)[reply]

Actually the symbol I have seen is similar to ÷ except that the dot below the line is removed. JRSpriggs (talk) 09:26, 15 September 2013 (UTC)[reply]

---

Yes, this symbol: ∸, from Arial Unicode MS, Unicode code point 2238 (hex). Bill Wvbailey (talk) 20:54, 15 September 2013 (UTC)[reply]
Thanks to Wvbailey for the character. I put it into the article. JRSpriggs (talk) 09:16, 16 September 2013 (UTC)[reply]

More intuitive definition?

I found the following on [wolfram](http://mathworld.wolfram.com/PrimitiveRecursiveFunction.html): "As first shown by Meyer and Ritchie (1967), do-loops (which have a fixed iteration limit) are a special case of while-loops. A function that can be implemented using only do-loops is called primitive recursive. (In contrast, a computable function can be coded using a combination of for- and while-loops, or while-loops only.) Examples of primitive recursive functions include power, greatest common divisor, and p_n (the function giving the nth prime)."

This definition might just be a name clash, but if it is equivalent, it does get the point across much more quickly and in a more layman-friendly way. But I am reluctant to edit the article as I do not have the expertise. What is your opinion, JRSpriggs? I don't think we should remove information, but if this 'primitive recursive function' is the same as that 'primitive recursive function' then we should mention that early on. Also, if this definition is added, we should elaborate more precisely on what we mean by a fixed iteration limit. JimCrayne (talk) 02:58, 2 February 2015 (UTC)[reply]

This is not a name clash, and the two definitions are equivalent, as soon as one has proved that the programming language which is used computes primitive recursive functions (Church's thesis). More precisely, if, for each loop of the program, there is a primitive recursive function, computable before entering the loop, which bounds the number of iterations, then the function computed by the program is primitive recursive. There is a related theorem, which is also rather intuitive, and may help in understanding complexity theory: a recursive function is primitive recursive if and only if its computation time is bounded above by a primitive recursive function of the size of the input. The relationship is the following: starting from a program, first compute a bound B on the execution time; then transform all while loops into do loops of the form "for i = 1 to B do". The practical consequence is that, as soon as you have computed the worst-case complexity of an algorithm, you know that it computes a primitive recursive function. IMO, the article should emphasize these results, as, except for specialists in recursive functions, these are the main properties of primitive recursive functions that deserve to be known by mathematicians and computer scientists. I learned this a long time ago in a book, the name of which I do not remember. Therefore editing the article accordingly would take too much time for me. Note also that I have suggested before (#Definition) a modification of the lead that is similar to this. D.Lazard (talk) 10:10, 2 February 2015 (UTC)[reply]
Yes, I think you two have the right idea of what a primitive recursive function is. However, I would rather that you make the change than that I make it because I am happy with the present version. You can change it to make it clear to yourselves and then I will make sure that it is still technically correct. JRSpriggs (talk) 04:24, 3 February 2015 (UTC)[reply]
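
A toy Python illustration of the transformation described just above (the example and names are mine): when a bound B on the number of iterations can be computed before the loop is entered (here the crude bound B = n), a while-loop search can be replaced by a bounded for loop that returns the same answers.

def least_divisor_while(n):
    # unbounded search for the least divisor >= 2 of n (assumes n >= 2)
    d = 2
    while n % d != 0:
        d = d + 1
    return d

def least_divisor_bounded(n):
    # the same search, but the loop bound B = n is fixed before entering the loop
    answer = n                   # n always divides itself, so n is a safe default
    for d in range(2, n + 1):
        if n % d == 0:
            answer = d
            break
    return answer

print(least_divisor_while(91), least_divisor_bounded(91))  # 7 7
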

List of p.r. functions

The numbered list of functions from Addition to Logarithm had several inconsistencies and typos and unnecessary elements; I fixed several of them as a minor edit. This last one I left as a major edit because I'm not at all sure if I've preserved the original intent -- the original text

"b divides a" [ a | b ]: If the remainder ( a, b )=0 then [ a | b ] else b does not divide a "evenly"

isn't even grammatically sensible, and seems to be confused about which of a and b is the potential divisor. I rewrote it to

a | b: (a divides b): If b=k×a for some k then 0 else 1

under the assumption that it was intended to be the (t=0,f=1) representation function for the usual "a divides b" predicate.

There are two (evidently different) functions both written as a×b in this list. Perhaps this needs addressing.

I left the final item

lo(x, y): logarithm of x to the base y

untouched, but since "logarithm" is understood by default to be real-valued, this needs a clarification of what it means. My best guess would be that lo(576, 2) = 6, which makes it similar to, but not identical to, the (a)i function earlier in the list. Joule36e5 (talk) 06:43, 6 April 2015 (UTC)[reply]

(Act II, five years later.) I finally looked it up for myself, and my guess was correct; I've edited the logarithm item to clarify it. Joule36e5 (talk) 09:59, 5 October 2020 (UTC)[reply]
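
For concreteness, here is a small Python sketch (my own) of the two items discussed in this section, using the (t=0, f=1) convention for the divisibility function and the reading of lo confirmed above, namely the largest k such that y**k divides x, so that lo(576, 2) = 6:

def divides(a, b):
    # representation function for "a divides b": 0 if true, 1 if false
    return 0 if a != 0 and b % a == 0 else 1

def lo(x, y):
    # largest k such that y**k divides x; 0 in the degenerate cases
    k = 0
    while y > 1 and x > 0 and x % y == 0:
        x, k = x // y, k + 1
    return k

print(divides(3, 12), divides(5, 12))  # 0 1
print(lo(576, 2))                      # 6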

Computable function that is not primitive recursive

In the introduction we read:

In fact, it is difficult to devise a computable function that is not primitive recursive.

Actually I think it is quite easy: Each non-total computable function is not primitive recursive since primitive recursive functions are total. So should we change it to the following?

In fact, it is difficult to devise a total computable function that is not primitive recursive.

--Jobu0101 (talk) 11:13, 22 April 2015 (UTC)[reply]
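
A minimal Python sketch of the point (my own illustration): unbounded search gives a computable but partial function, and a partial function cannot be primitive recursive, since every primitive recursive function is total.

def mu_search(p):
    # least y with p(y) true; never halts if no such y exists,
    # so the function computed is partial and hence not primitive recursive
    y = 0
    while not p(y):
        y = y + 1
    return y

print(mu_search(lambda y: y * y > 50))   # 8
# mu_search(lambda y: False) would never return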

Good point. I changed "computable" to "total recursive" in that sentence. JRSpriggs (talk) 04:56, 23 April 2015 (UTC)[reply]

Floop issue.

"Floop is an example, as are almost all real-world computer programming languages." You mean an example of a Turing complete language? I know you want to talk about Floop here (maybe to contrast it with bloop?), but you've added this aweful sentence just to get that in. It doesn't need a whole sentence, and especially not one as distracting as the one you've placed here. You should maybe just sneak it in to the previous sentence, or even leave it out. If you have to include it, you could use a formation like "of which floop is an example, as are almost all real world programming languages". If you _really_ want to contrast PR languages with TC languages, you could devote a whole section to that. But if you want to sneak in a mention of Floop, please make it sneaky enough that it doesn't distract the reader.

Anyway, Floop is basically just a reference. If you really want Floop in there, consider what Floop is doing there. Bloop is actually primitive recursive, so for illustrating what PR languages are like, Bloop could probably do a great job on its own. So for a mention of Floop to make sense, it's really got to be doing some work. Remember, your job is to help the reader understand what primitive recursion is. Anything you put in there that doesn't contribute significantly to that goal will only distract the reader. So either make Floop do some actual work - or - if it's not pulling its weight, just give it the axe. Comiscuous (talk) 04:46, 7 April 2021 (UTC)[reply]

Who is you? Is that a reference to a discussion that can be found elsewhere, or simply the editor who has added this sentence to the article?
In any case, I have removed the mention of Floop, and rewritten a large part of the section. However, some more work is still needed, as this rewritten part contains general properties of primitive recursive functions that are independent of any programming language. D.Lazard (talk) 09:25, 7 April 2021 (UTC)[reply]
'You' is the editor who brought in Floop in that paragraph, and I was presuming that he or she would somehow encounter the 'Floop issue' talk section that I created. My bad. I forgot about how Wikipedia works and the fact that editors have limited attention and cannot be notified of everything that happens in a talk page of an article they've edited.
Seems to be about BlooP and FlooP. 67.198.37.16 (talk) 18:31, 11 February 2024 (UTC)[reply]

All the "what for" are there[edit]

Removed the calls for "what for". I find:

Section: Use in first-order Peano arithmetic
The "what for" is the first paragraph of the section.

Section: Elimination of parameters
The "what for" are the last two paragraphs of the section.

Jan Burse (talk) 08:33, 28 February 2024 (UTC)[reply]

Wikipedia is an encyclopedia, and therefore requires explanations. In particular, the purpose of a section must be clear to non-specialists, even if understanding the technical details requires more competence. For the moment, the purpose of these two sections is totally opaque. As an example, the sentence "Thus in order to define primitive recursive functions one has ..." is totally confusing, as it suggests that the definitions that precede it are incorrect.
Even the section titles are totally confusing. What does § Use in first-order Peano arithmetic mean? Why not a section "Use in ZFC"? Moreover, as far as I know there is only one Peano arithmetic, not a first-order one and a non-first-order one.
What are the "parameters" referred to by the section title § Elimination of parameters, and not referred to in the body of the section? I guess that the section is about defining primitive recursive functions without using multivariate functions, but I am unable to verify this guess.
Moreover, these sections are not sourced. This could be a reason for removing them as WP:OR.
So, I have reverted your tag removals. Please do not remove them again without addressing the issues of these sections. D.Lazard (talk) 10:08, 28 February 2024 (UTC)[reply]
According to Second-order_arithmetic there is more than one PA. This might be contextually relevant, since the Principia used higher-order logic, as noted by Gödel in 2.1 Definitions: https://hirzels.com/martin/papers/canon00-goedel.pdf
Jan Burse (talk) 12:33, 28 February 2024 (UTC)[reply]
Nobody uses the terminology "multivariate" in recursion theory. The term n-ary means a function with n arguments. The Cantor pairing function can be used to encode two or more arguments in one argument, and it's also possible to reverse the process. The pairing function is itself primitive recursive, and so are its projections.
Jan Burse (talk) 23:02, 28 February 2024 (UTC)[reply]

About the references used in the article: instead of Rose as the reference for Elimination of Parameters, it seems you could also use Moschovakis 1994, ch. 5. Or, as SEP has put it, they show the other way around, from h to h2 and then to h1, but then have the following remark:

Here the first k arguments x0,…,xk−1 to g are the parameters, the k+1st argument y is the recursion variable, and the k+2nd argument h(x0,…,xk−1,y) gives the prior value of h. An elementary set theoretic argument shows that for each k∈N, if f is k-ary and g is k+2-ary, then there is a unique k+1-ary function h satisfying (18)—see, e.g., (Moschovakis 1994, ch. 5).

https://plato.stanford.edu/entries/recursive-functions/#PrimRecuFuncPR Maybe their distinction between parameters and recursion variable is also the better terminology. But the SEP article uses a different argument ordering.

Jan Burse (talk) 11:44, 28 February 2024 (UTC)[reply]

You do not address the main issue of these sections. Wikipedia is not for summarizing some specific articles, relying on these articles to know the authors' definitions of the technical terms. The aim of an encyclopedia is to present the most important results in a language that is compatible with the most common terminology, and is accessible to (in the case of this article) an undergraduate student in mathematics. It must also be clear why a result is important enough to belong to the article (per WP:UNDUE). Neither the sections under discussion nor your above comments satisfy these requirements. D.Lazard (talk) 14:12, 28 February 2024 (UTC)[reply]
To reduce the complexity of the article I would split off Gödel, and I would also split off LOOP/WHILE. Devoting separate articles to the two would make the primitive recursive article more readable and less cluttered with applications of the notion of primitive recursion in Gödel's theorem and computational devices, such as LOOP, that show primitive recursive behaviour.
Jan Burse (talk) 22:41, 28 February 2024 (UTC)[reply]

I have deleted the first disputed section, and moved the second disputed section to here:

Gödel's β function

There I would also like to add a reference to this:

PRIMITIVE RECURSIVE FUNCTIONS by
Zéev Rosenfeld - March 1959
Department of Mathematics McGill University Montreal.

But I don't know how to edit and add a reference.

Jan Burse (talk) 22:51, 28 February 2024 (UTC)[reply]