There are several feature selection techniques that can be used for classification and clustering, including:
Wrapper methods: These methods evaluate candidate feature subsets by training a specific learning algorithm on them and scoring its predictive performance. Examples include forward selection and backward elimination.
Filter methods: These methods use a statistical measure, computed independently of any learning algorithm, to score the importance of each feature. Examples include the chi-squared test and mutual information.
Embedded methods: These methods use a learning algorithm that has built-in feature selection capabilities. An example is Lasso regression in linear models, whose L1 penalty drives the coefficients of irrelevant features to zero (Ridge regression, by contrast, only shrinks coefficients and does not remove features).
Hybrid methods: These methods combine the strengths of wrapper and filter methods.
Correlation-based feature selection (CFS): This method selects features that are highly correlated with the target variable while having low correlation with one another.
Recursive Feature Elimination (RFE): This method recursively removes attributes and builds a model on those that remain, using model accuracy to identify which attributes (and combinations of attributes) contribute most to predicting the target attribute (see the sketch after this list).
Overall, the choice of feature selection technique will depend on the specific problem and dataset at hand.
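As a minimal sketch of how a few of these techniques look in practice, the snippet below applies a filter method (mutual information), an embedded method (Lasso), and wrapper-style RFE to a toy dataset. The text names no library, so the use of scikit-learn here, and every API choice, is an illustrative assumption.

```python
# A minimal sketch, assuming scikit-learn; the source names the
# techniques but no library, so these API choices are illustrative.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, mutual_info_classif, RFE
from sklearn.linear_model import LogisticRegression, Lasso

# Toy data: 100 samples, 20 features, only 5 of which are informative.
X, y = make_classification(n_samples=100, n_features=20,
                           n_informative=5, random_state=0)

# Filter method: rank features by mutual information with the target
# and keep the top 5. No learning algorithm is involved.
filt = SelectKBest(mutual_info_classif, k=5).fit(X, y)
print("Filter (mutual information):", filt.get_support(indices=True))

# Embedded method: Lasso's L1 penalty drives irrelevant coefficients
# to exactly zero, so selection happens during model fitting.
lasso = Lasso(alpha=0.05).fit(X, y)
print("Embedded (Lasso nonzero):", (lasso.coef_ != 0).nonzero()[0])

# Wrapper-style RFE: repeatedly fit a model, drop the weakest feature,
# and refit until only 5 features remain.
rfe = RFE(LogisticRegression(max_iter=1000),
          n_features_to_select=5).fit(X, y)
print("RFE:", rfe.get_support(indices=True))
```

Note how the filter method scores features once, while Lasso and RFE tie the selection to a particular model, which is the basic trade-off the list above describes.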
Data mining tasks face many challenges, the biggest being the high dimensionality of the datasets involved. Reducing this dimensionality is a key prerequisite for successful data mining: mining algorithms scale poorly with the number of features, and on very high-dimensional data the mining task can become intractable. It therefore becomes necessary to reduce the dimensionality of the data before mining.
There are two broad approaches to dimensionality reduction: feature selection and feature extraction (Bishop, 1995; Devijver and Kittler, 1982; Fukunaga, 1990). Feature selection methods reduce the dimensionality of the original feature space by selecting a subset of features without any transformation, which preserves the physical interpretability of the selected features as in the original space. Feature extraction methods reduce the dimensionality by a linear transformation of the input features into a completely different space. Because this transformation alters the features, their interpretation becomes difficult: features in the transformed space lose their physical interpretability, and the original contribution of each input becomes difficult to ascertain (Bishop, 1995). The choice of dimensionality reduction method is application specific and depends on the nature of the data. Feature selection is especially advantageous when features must keep their original physical meaning, since no transformation of the data is made. This matters for problem understanding in applications such as text mining and genetic analysis, where only the relevant information is analysed.
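To make the selection-versus-extraction contrast concrete, the sketch below compares a simple filter-based selector, which returns a subset of the original columns, with PCA, whose components are linear combinations that no longer correspond to any single input feature. scikit-learn is again an assumption, and the feature names are hypothetical labels.

```python
# A minimal sketch of selection vs. extraction, assuming scikit-learn;
# the feature names below are hypothetical.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.feature_selection import SelectKBest, f_classif

X, y = make_classification(n_samples=100, n_features=6,
                           n_informative=3, random_state=0)
names = np.array(["f0", "f1", "f2", "f3", "f4", "f5"])

# Feature selection: the reduced data is a subset of original columns,
# so each retained dimension keeps its physical meaning.
sel = SelectKBest(f_classif, k=3).fit(X, y)
print("Selected original features:", names[sel.get_support()])

# Feature extraction: PCA projects the data onto new axes; each
# component mixes all inputs, so no single original feature survives.
pca = PCA(n_components=3).fit(X)
print("Component loadings (rows = new axes, cols = original features):")
print(np.round(pca.components_, 2))
```

The printed loadings show why extracted features are hard to interpret: every new axis blends all six inputs, whereas the selector's output can still be read as named columns of the original data.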