Unique negative dimension

Unique negative dimension (UND) is a complexity measure for the model of learning from positive examples only. The unique negative dimension of a class C of concepts is the size of a largest subclass D ⊆ C such that for every concept c ∈ D, the set ⋂(D ∖ {c}) ∖ c is nonempty; that is, every c ∈ D has a "unique negative example", a point contained in all the other concepts of D but not in c.
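
For a finite concept class the definition can be checked directly by exhaustive search. The sketch below (the function name, the Python setting, and the explicit domain parameter are illustrative assumptions, not part of the source) enumerates subclasses from largest to smallest and returns the size of the first one in which every concept has a unique negative example.

```python
from itertools import combinations


def unique_negative_dimension(concepts, domain):
    """Brute-force UND of a finite concept class over a finite domain.

    Returns the size of a largest subclass D such that every concept c in D
    has a "unique negative example": a point lying in every other concept of
    D but not in c.  The empty intersection (when D = {c}) is read as the
    whole domain.
    """
    concepts = list({frozenset(c) for c in concepts})  # keep distinct concepts only
    domain = frozenset(domain)

    def has_unique_negative(c, subclass):
        others = [d for d in subclass if d is not c]
        common = frozenset.intersection(*others) if others else domain
        return bool(common - c)  # some point is in all other concepts but not in c

    for size in range(len(concepts), 0, -1):  # try large subclasses first
        for subclass in combinations(concepts, size):
            if all(has_unique_negative(c, subclass) for c in subclass):
                return size
    return 0


# Example: three pairwise-overlapping concepts over the domain {1, 2, 3}.
# Each concept misses exactly one point shared by the other two, so the
# whole class is a witness and UND = 3.
print(unique_negative_dimension([{1, 2}, {2, 3}, {1, 3}], domain={1, 2, 3}))  # -> 3
```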

The concept was originally proposed by M. Geréb-Graus in "Complexity of learning from one-side examples", Technical Report TR-20-89, Harvard University Division of Engineering and Applied Science, 1989.[1][2][3]

References

  1. Darnstädt, Malte; Simon, Hans Ulrich; Szörényi, Balázs (30 January 2014). "Supervised learning and Co-training". Theoretical Computer Science. 519: 68–87. doi:10.1016/j.tcs.2013.09.020.
  2. Geréb-Graus, Mihály (1989). Lower bounds on parallel, distributed and automata computations (Thesis). Harvard University. OCLC 1243704701. OSTI 5815133. TR-20-89.
  3. Ehrenfeucht, Andrzej; Haussler, David; Kearns, Michael; Valiant, Leslie (1 September 1989). "A general lower bound on the number of examples needed for learning". Information and Computation. 82 (3): 247–261. doi:10.1016/0890-5401(89)90002-3. S2CID 1925579.