Aim: Data Preprocessing using the scikit-learn Python library
Theory: Nowadays, collecting data has become much easier: you can use a wide range of sensors to capture data from machines, send out a survey form to gather user opinions, or simply download large datasets from sites like Kaggle for experimentation. However, data collected in these ways is rarely ready for analysis right away. It must first be preprocessed to make it fit for analysis. Data preprocessing is the first step in working with data, and it is where data scientists spend most of their time. It is the technique used to convert raw data into a clean data set.
Steps Involved in Data Preprocessing:
1. Data Cleaning:
The data can have many irrelevant and missing parts. To handle this, data cleaning is done. It involves handling missing data, noisy data, etc.
- (a). Missing Data:
This situation arises when some values are absent from the dataset. It can be handled in various ways; some of them are:
- Ignore the tuples: This approach is suitable only when the dataset is quite large and multiple values are missing within a tuple.
- Fill the missing values: There are various ways to do this task. You can choose to fill the missing values manually, with the attribute mean, or with the most probable value, as in the sketch below.
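The following is a minimal sketch of both strategies on a small made-up DataFrame (the column names and values here are assumptions for illustration): dropping tuples that contain missing values, and filling missing values with the attribute mean using scikit-learn's SimpleImputer.

```python
import numpy as np
import pandas as pd
from sklearn.impute import SimpleImputer

# Hypothetical data with missing entries
df = pd.DataFrame({
    "Age":  [22.0, np.nan, 35.0, 58.0, np.nan],
    "Fare": [7.25, 71.28, np.nan, 26.55, 8.05],
})

# Option 1: ignore (drop) tuples that contain missing values
dropped = df.dropna()

# Option 2: fill missing values with the attribute mean
imputer = SimpleImputer(strategy="mean")
filled = pd.DataFrame(imputer.fit_transform(df), columns=df.columns)

print(dropped)
print(filled)
```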
- (b). Noisy Data:
Noisy data is meaningless data that cannot be interpreted by machines. It can be generated by faulty data collection, data entry errors, etc. It can be handled in the following ways:
- Binning Method: This method works on sorted data in order to smooth it. The whole data is divided into segments of equal size, and each segment is handled separately; for example, all values in a segment can be replaced by the segment mean, or boundary values can be used. A sketch of binning follows this list.
- Regression: Here the data can be smoothed by fitting it to a regression function. The regression used may be linear (having one independent variable) or multiple (having multiple independent variables).
- Clustering: This approach groups similar data into clusters. Outliers may go undetected, or they will fall outside the clusters.
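Below is a small sketch of smoothing by bin means, using pandas to divide a sorted attribute into equal-size (equal-frequency) bins and replace each value with its bin mean; the numbers are made up for illustration.

```python
import pandas as pd

# Hypothetical sorted attribute values
values = pd.Series([4, 8, 9, 15, 21, 21, 24, 25, 26, 28, 29, 34])

# Divide the data into 3 equal-size segments (bins)
bins = pd.qcut(values, q=3, labels=False)

# Smooth each bin by replacing its members with the bin mean
smoothed = values.groupby(bins).transform("mean")

print(pd.DataFrame({"original": values, "bin": bins, "smoothed": smoothed}))
```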
2. Data Transformation:
This step is taken in order to transform the data into forms appropriate for the mining process. It involves the following ways:
- Normalization: It is done in order to scale the data values into a specified range (-1.0 to 1.0 or 0.0 to 1.0); see the sketch after this list.
- Attribute Selection: In this strategy, new attributes are constructed from the given set of attributes to help the mining process.
- Discretization: This is done to replace the raw values of a numeric attribute with interval levels or conceptual levels.
- Concept Hierarchy Generation: Here attributes are converted from a lower level to a higher level in the hierarchy. For example, the attribute "city" can be converted to "country".
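As an illustration, the sketch below normalizes a small hypothetical numeric attribute into the range 0.0 to 1.0 with scikit-learn's MinMaxScaler, and discretizes the same values into three interval levels with KBinsDiscretizer; the input array is an assumption made for the example.

```python
import numpy as np
from sklearn.preprocessing import KBinsDiscretizer, MinMaxScaler

# Hypothetical numeric attribute (e.g., fares)
X = np.array([[7.25], [26.55], [71.28], [8.05], [512.33]])

# Normalization: scale values into the range 0.0 to 1.0
scaler = MinMaxScaler(feature_range=(0.0, 1.0))
X_norm = scaler.fit_transform(X)

# Discretization: replace raw values with 3 interval levels
disc = KBinsDiscretizer(n_bins=3, encode="ordinal", strategy="uniform")
X_disc = disc.fit_transform(X)

print(X_norm.ravel())
print(X_disc.ravel())
```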
3. Data Reduction:
Data mining is a technique used to handle huge amounts of data, and analysis becomes harder when working with such volumes. To address this, we use data reduction techniques, which aim to increase storage efficiency and reduce data storage and analysis costs.
The various steps to data reduction are:
- Data Cube Aggregation: Aggregation operations are applied to the data to construct the data cube.
- Attribute Subset Selection: Only the highly relevant attributes should be used; the rest can be discarded. For attribute selection, one can use the level of significance and the p-value of the attribute: an attribute whose p-value is greater than the significance level can be discarded.
- Numerosity Reduction: This enables storing a model of the data instead of the whole data, for example regression models.
- Dimensionality Reduction: This reduces the size of the data through encoding mechanisms, which can be lossy or lossless. If the original data can be retrieved after reconstruction from the compressed data, the reduction is called lossless; otherwise it is called lossy. Two effective methods of dimensionality reduction are wavelet transforms and PCA (Principal Component Analysis); a PCA sketch follows this list.
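The sketch below applies scikit-learn's PCA to a randomly generated matrix with redundant features and keeps only five principal components; the data and the component count are assumptions chosen purely for illustration.

```python
import numpy as np
from sklearn.decomposition import PCA

# Hypothetical dataset: 100 samples with 10 features, half of them redundant
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))
X[:, 5:] = X[:, :5] + 0.1 * rng.normal(size=(100, 5))

# Dimensionality reduction: keep 5 principal components
pca = PCA(n_components=5)
X_reduced = pca.fit_transform(X)

print(X.shape, "->", X_reduced.shape)
print("Explained variance ratio:", pca.explained_variance_ratio_)
```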
The dataset used for this experiment has 12 columns related to passenger details, which are:
- PassengerId: Passenger’s unique ID
- Survived: Survival status of the passengers (0 = No; 1 = Yes)
- Pclass: Passenger class (1 = First; 2 = Second; 3 = Third)
- Name: Passenger’s name
- Sex: Sex of the Passenger
- Age: Age of the Passenger
- SibSp: Number of siblings/spouses aboard
- ParCh: Number of parents/children aboard
- Ticket: Ticket number
- Fare: Passenger fare
- Cabin: Cabin number
- Embarked: Port of embarkation (C = Cherbourg; Q = Queenstown; S = Southampton)
- Null values - This shows the summation of null values in each column.
- Data cleaning - This removes unwanted data.
- Handling missing values - This handles the missing values in each column.
- Encoding categorical features - This converts categorical labels into numeric form (see the sketch below).
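Putting these steps together, the sketch below assumes the passenger data is available as a CSV file named train.csv (the file name and the exact column values are assumptions). It prints the null-value counts, fills the missing Age and Embarked entries, drops the mostly empty Cabin column, and encodes the categorical Sex and Embarked columns with scikit-learn's LabelEncoder.

```python
import pandas as pd
from sklearn.preprocessing import LabelEncoder

# Load the passenger dataset (file name assumed to be train.csv)
df = pd.read_csv("train.csv")

# Null values: summation of null values in each column
print(df.isnull().sum())

# Handling missing values: fill Age with the mean, Embarked with the mode,
# and drop the Cabin column, which is mostly empty
df["Age"] = df["Age"].fillna(df["Age"].mean())
df["Embarked"] = df["Embarked"].fillna(df["Embarked"].mode()[0])
df = df.drop(columns=["Cabin"])

# Encoding categorical features: convert Sex and Embarked to numeric labels
for col in ["Sex", "Embarked"]:
    df[col] = LabelEncoder().fit_transform(df[col])

print(df.head())
```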