random forest

Random forests, also known as random decision forests, are an ensemble learning method for classification, regression and other tasks that operates by constructing a multitude of decision trees at training time and outputting the class that is the mode of the classes (classification) or the mean prediction (regression) of the individual trees. Random decision forests correct for decision trees' habit of overfitting to their training set.
The algorithm for inducing Breiman's random forest was developed by Leo Breiman and Adele Cutler, and "Random Forests" is their trademark (U.S. trademark registration number 3185828, registered 2006/12/19). The method combines Breiman's "bagging" idea with the random selection of features, introduced independently by Ho and by Amit and Geman, in order to construct a collection of decision trees with controlled variance.
The selection of a random subset of features is an example of the random subspace method, which, in Ho's formulation, is a way to implement the "stochastic discrimination" approach to classification proposed by Eugene Kleinberg.
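To make the training-and-aggregation procedure described above concrete, the following is a minimal illustrative sketch of a bagged forest with per-split random feature selection, assuming scikit-learn, SciPy and NumPy are available. It is not Breiman and Cutler's reference implementation, and the parameter choices (100 trees, "sqrt" features considered per split) are illustrative defaults only.
<syntaxhighlight lang="python">
import numpy as np
from scipy import stats
from sklearn.tree import DecisionTreeClassifier

def fit_forest(X, y, n_trees=100, max_features="sqrt", random_state=0):
    """Grow a forest of decision trees on bootstrap samples of (X, y)."""
    rng = np.random.default_rng(random_state)
    n_samples = X.shape[0]
    forest = []
    for _ in range(n_trees):
        # Bagging: draw a bootstrap sample of the training set (with replacement).
        idx = rng.integers(0, n_samples, n_samples)
        tree = DecisionTreeClassifier(
            max_features=max_features,  # random subset of features considered at each split
            random_state=int(rng.integers(1 << 31)),
        )
        tree.fit(X[idx], y[idx])
        forest.append(tree)
    return forest

def predict_forest(forest, X):
    # Classification: the forest outputs the mode (majority vote) of the individual trees.
    votes = np.stack([tree.predict(X) for tree in forest])  # shape (n_trees, n_samples)
    return stats.mode(votes, axis=0, keepdims=False).mode
</syntaxhighlight>
For regression, the aggregation step would simply average the trees' predictions (votes.mean(axis=0)) instead of taking a majority vote.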
== History ==
The general method of random decision forests was first proposed by Ho in 1995, who established that forests of trees splitting with oblique hyperplanes can gain accuracy as they grow without suffering from overtraining, provided they are randomly restricted to be sensitive to only selected feature dimensions. A subsequent work along the same lines concluded that other splitting methods behave similarly, as long as they are randomly forced to be insensitive to some feature dimensions. Note that this observation, that a more complex classifier (a larger forest) gets more accurate nearly monotonically, is in sharp contrast to the common belief that the complexity of a classifier can only grow to a certain level before accuracy is hurt by overfitting. The explanation of the forest method's resistance to overtraining can be found in Kleinberg's theory of stochastic discrimination.

The early development of Breiman's notion of random forests was influenced by the work of Amit and Geman, who introduced the idea of searching over a random subset of the available decisions when splitting a node, in the context of growing a single tree. The idea of random subspace selection from Ho was also influential in the design of random forests: in that method a forest of trees is grown, and variation among the trees is introduced by projecting the training data into a randomly chosen subspace before fitting each tree. Finally, the idea of randomized node optimization, where the decision at each node is selected by a randomized procedure rather than a deterministic optimization, was first introduced by Dietterich.
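As a brief sketch of the random subspace idea just described, each tree is trained on the full sample set but sees only a randomly chosen subset of the feature dimensions, i.e. the data are projected onto a random axis-aligned subspace before fitting. This is purely illustrative; the choice of half the features per tree is an assumption, not part of Ho's formulation.
<syntaxhighlight lang="python">
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def fit_random_subspace_trees(X, y, n_trees=100, random_state=0):
    rng = np.random.default_rng(random_state)
    n_features = X.shape[1]
    k = max(1, n_features // 2)  # assumed subspace size: half of the features
    forest = []
    for _ in range(n_trees):
        # Project the training data onto a randomly chosen axis-aligned subspace.
        dims = rng.choice(n_features, size=k, replace=False)
        tree = DecisionTreeClassifier().fit(X[:, dims], y)
        forest.append((dims, tree))  # keep the subspace so it can be reused at prediction time
    return forest
</syntaxhighlight>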
The introduction of random forests proper was first made in a paper by Leo Breiman. This paper describes a method of building a forest of uncorrelated trees using a CART-like procedure, combined with randomized node optimization and bagging. In addition, this paper combines several ingredients, some previously known and some novel, which form the basis of the modern practice of random forests, in particular (both are sketched in code after this list):
# Using out-of-bag error as an estimate of the generalization error.
# Measuring variable importance through permutation.
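The following hedged sketch illustrates both ingredients on top of a bagged forest. The out-of-bag (OOB) error evaluates each training point using only the trees whose bootstrap sample did not contain it. The permutation importance shown is a simplified variant that shuffles a feature column once globally and measures the resulting increase in OOB error, rather than permuting within each tree's OOB set as Breiman describes; all names are illustrative and not taken from the paper or any particular library.
<syntaxhighlight lang="python">
import numpy as np
from scipy import stats
from sklearn.tree import DecisionTreeClassifier

def fit_forest_with_oob(X, y, n_trees=100, random_state=0):
    rng = np.random.default_rng(random_state)
    n = X.shape[0]
    forest = []
    for _ in range(n_trees):
        idx = rng.integers(0, n, n)          # bootstrap sample
        oob_mask = np.ones(n, dtype=bool)
        oob_mask[idx] = False                # training points this tree never saw
        tree = DecisionTreeClassifier(max_features="sqrt",
                                      random_state=int(rng.integers(1 << 31)))
        tree.fit(X[idx], y[idx])
        forest.append((tree, oob_mask))
    return forest

def oob_error(forest, X, y):
    # Each point is classified by majority vote of only those trees that did NOT see it.
    n = X.shape[0]
    votes = [[] for _ in range(n)]
    for tree, oob_mask in forest:
        for i, p in zip(np.flatnonzero(oob_mask), tree.predict(X[oob_mask])):
            votes[i].append(p)
    scored = [(i, stats.mode(v, keepdims=False).mode) for i, v in enumerate(votes) if v]
    return float(np.mean([y[i] != p for i, p in scored]))

def permutation_importance(forest, X, y, feature, random_state=0):
    # Simplified variant: increase in OOB error after shuffling one feature column.
    rng = np.random.default_rng(random_state)
    X_perm = X.copy()
    X_perm[:, feature] = rng.permutation(X_perm[:, feature])
    return oob_error(forest, X_perm, y) - oob_error(forest, X, y)
</syntaxhighlight>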
The report also offers the first theoretical result for random forests in the form of a bound on the generalization error which depends on the strength of the trees in the forest and their correlation.
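A hedged restatement of that bound, with notation assumed here rather than quoted from the paper: if <math>s</math> denotes the strength of the individual tree classifiers and <math>\bar{\rho}</math> the mean value of their pairwise correlation, the generalization error <math>PE^{*}</math> of the forest satisfies
<math>PE^{*} \le \frac{\bar{\rho}\,(1 - s^{2})}{s^{2}},</math>
so forests of strong, weakly correlated trees come with the tightest guarantee.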

Excerpt source: Wikipedia, the free encyclopedia. Read the full "random forest" article on Wikipedia.


