Drucker, H. 1997. Improving regressors using boosting techniques. In D. H. Fisher,
editor, Proceedings of the Fourteenth International Conference on Machine
Learning, Nashville, TN. San Francisco: Morgan Kaufmann, pp. 107–115.
Drummond, C., and R. C. Holte. 2000. Explicitly representing expected cost: An
alternative to ROC representation. In R. Ramakrishnan, S. Stolfo, R. Bayardo,
and I. Parsa, editors, Proceedings of the Sixth International Conference on
Knowledge Discovery and Data Mining, Boston, MA. New York: ACM, pp.
198–207.
Duda, R. O., and P. E. Hart. 1973. Pattern classification and scene analysis. New York:
John Wiley.
Duda, R. O., P. E. Hart, and D. G. Stork. 2001. Pattern classification, second edition.
New York: John Wiley.
Dumais, S. T., J. Platt, D. Heckerman, and M. Sahami. 1998. Inductive learning algo-
rithms and representations for text categorization. In Proceedings of the ACM
Seventh International Conference on Information and Knowledge Management,
Bethesda, MD. New York: ACM, pp. 148–155.
Efron, B., and R. Tibshirani. 1993. An introduction to the bootstrap. London:
Chapman and Hall.
Egan, J. P. 1975. Signal detection theory and ROC analysis. Series in Cognition and
Perception. New York: Academic Press.
Fayyad, U. M., and K. B. Irani. 1993. Multi-interval discretization of continuous-
valued attributes for classification learning. In Proceedings of the Thirteenth
International Joint Conference on Artificial Intelligence, Chambery, France. San
Francisco: Morgan Kaufmann, pp. 1022–1027.
Fayyad, U. M., and P. Smyth. 1995. From massive datasets to science catalogs:
Applications and challenges. In Proceedings of the Workshop on Massive
Datasets. Washington, DC: NRC, Committee on Applied and Theoretical
Statistics.
Fayyad, U. M., G. Piatetsky-Shapiro, P. Smyth, and R. Uthurusamy, editors. 1996.
Advances in knowledge discovery and data mining. Menlo Park, CA: AAAI
Press/MIT Press.
Fisher, D. 1987. Knowledge acquisition via incremental conceptual clustering.
Machine Learning 2(2):139–172.
Fisher, R. A. 1936. The use of multiple measurements in taxonomic problems.
Annals of Eugenics 7(part II):179–188. Reprinted in Contributions to
Mathematical Statistics, 1950. New York: John Wiley.
Fix, E., and J. L. Hodges Jr. 1951. Discriminatory analysis; nonparametric discrim-
ination: Consistency properties. Technical Report 21-49-004(4), USAF School
of Aviation Medicine, Randolph Field, Texas.
Flach, P. A., and N. Lachiche. 1999. Confirmation-guided discovery of first-order
rules with Tertius. Machine Learning 42(1/2):61–95.
Fletcher, R. 1987. Practical methods of optimization, second edition. New York: John
Wiley.
Fradkin, D., and D. Madigan. 2003. Experiments with random projections for
machine learning. In L. Getoor, T. E. Senator, P. Domingos, and C. Faloutsos,
editors, Proceedings of the Ninth International Conference on Knowledge
Discovery and Data Mining, Washington, DC. New York: ACM, pp. 517–522.
Frank, E. 2000. Pruning decision trees and lists. PhD Dissertation, Department of
Computer Science, University of Waikato, New Zealand.
Frank, E., and M. Hall. 2001. A simple approach to ordinal classification. In L. de
Raedt and P. A. Flach, editors, Proceedings of the Twelfth European Conference
on Machine Learning, Freiburg, Germany. Berlin: Springer-Verlag, pp. 145–156.
Frank, E., and I. H. Witten. 1998. Generating accurate rule sets without global opti-
mization. In J. Shavlik, editor, Proceedings of the Fifteenth International
Conference on Machine Learning, Madison, WI. San Francisco: Morgan
Kaufmann, pp. 144–151.
———. 1999. Making better use of global discretization. In I. Bratko and S. Dze-
roski, editors, Proceedings of the Sixteenth International Conference on Machine
Learning, Bled, Slovenia. San Francisco: Morgan Kaufmann, pp. 115–123.
Frank, E., M. Hall, and B. Pfahringer. 2003. Locally weighted Naïve Bayes. In U.
Kjærulff and C. Meek, editors, Proceedings of the Nineteenth Conference on
Uncertainty in Artificial Intelligence, Acapulco, Mexico. San Francisco: Morgan
Kaufmann, pp. 249–256.
Frank, E., G. Holmes, R. Kirkby, and M. Hall. 2002. Racing committees for large
datasets. In S. Lange, K. Satoh, and C. H. Smith, editors, Proceedings of the
Fifth International Conference on Discovery Science, Lübeck, Germany. Berlin:
Springer-Verlag, pp. 153–164.
Frank, E., G. W. Paynter, I. H. Witten, C. Gutwin, and C. G. Nevill-Manning. 1999.
Domain-specific keyphrase extraction. In Proceedings of the Sixteenth
International Joint Conference on Artificial Intelligence, Stockholm, Sweden.
San Francisco: Morgan Kaufmann, pp. 668–673.
Freitag, D. 2000. Machine learning for information extraction in informal domains.
Machine Learning 39(2/3):169–202.
Freund, Y., and L. Mason. 1999. The alternating decision-tree learning algorithm.
In I. Bratko and S. Dzeroski, editors, Proceedings of the Sixteenth International
Conference on Machine Learning, Bled, Slovenia. San Francisco: Morgan
Kaufmann, pp. 124–133.
Freund, Y., and R. E. Schapire. 1996. Experiments with a new boosting algorithm.
In L. Saitta, editor, Proceedings of the Thirteenth International Conference on
Machine Learning, Bari, Italy. San Francisco: Morgan Kaufmann, pp. 148–156.
———. 1999. Large margin classification using the perceptron algorithm. Machine
Learning 37(3):277–296.
Friedman, J. H. 1996. Another approach to polychotomous classification. Technical
Report, Department of Statistics, Stanford University, Stanford, CA.
———. 2001. Greedy function approximation: A gradient boosting machine.
Annals of Statistics 29(5):1189–1232.
Friedman, J. H., J. L. Bentley, and R. A. Finkel. 1977. An algorithm for finding best
matches in logarithmic expected time. ACM Transactions on Mathematical
Software 3(3):209–226.
Friedman, J. H., T. Hastie, and R. Tibshirani. 2000. Additive logistic regression: A
statistical view of boosting. Annals of Statistics 28(2):337–374.
Friedman, N., D. Geiger, and M. Goldszmidt. 1997. Bayesian network classifiers.
Machine Learning 29(2):131–163.
Fulton, T., S. Kasif, and S. Salzberg. 1995. Efficient algorithms for finding multiway
splits for decision trees. In A. Prieditis and S. Russell, editors, Proceedings of
the Twelfth International Conference on Machine Learning, Tahoe City, CA. San
Francisco: Morgan Kaufmann, pp. 244–251.
Fürnkranz, J. 2002. Round robin classification. Journal of Machine Learning
Research 2:721–747.
Fürnkranz, J., and P. A. Flach. 2005. ROC ’n’ rule learning: Towards a better under-
standing of covering algorithms. Machine Learning 58(1):39–77.
Fürnkranz, J., and G. Widmer. 1994. Incremental reduced-error pruning. In H.
Hirsh and W. Cohen, editors, Proceedings of the Eleventh International
Conference on Machine Learning, New Brunswick, NJ. San Francisco: Morgan
Kaufmann, pp. 70–77.
Gaines, B. R., and P. Compton. 1995. Induction of ripple-down rules applied to
modeling large databases. Journal of Intelligent Information Systems 5(3):211–228.
Genesereth, M. R., and N. J. Nilsson. 1987. Logical foundations of artificial intelli-
gence. San Francisco: Morgan Kaufmann.