Data scientists spend a large amount of their time cleaning datasets and getting them down to a form with which they can work. In fact, a lot of data scientists argue that the initial steps of obtaining and cleaning data constitute 80% of the job.
Therefore, if you are just stepping into this field, or planning to, it is important to be able to deal with messy data, whether that means missing values, inconsistent formatting, malformed records, or nonsensical outliers. In this tutorial, we’ll leverage Python’s Pandas and NumPy libraries to clean data. We’ll cover the following:
Here are the datasets that we will be using:
You can download the datasets from Real Python’s GitHub repository in order to follow the examples here. Note: I recommend using Jupyter Notebooks to follow along. This tutorial assumes a basic understanding of the Pandas and NumPy libraries, including Pandas’ workhorse Series and DataFrame objects, common methods that can be applied to these objects, and familiarity with NumPy’s NaN values. Let’s import the required modules and get started!
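Assuming you have both libraries installed, the two conventional imports are all this tutorial needs:

```python
# Conventional aliases used throughout the tutorial
import pandas as pd
import numpy as np
```
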
Dropping Columns in a DataFrame

Often, you’ll find that not all the categories of data in a dataset are useful to you. For example, you might have a dataset containing student information (name, grade, standard, parents’ names, and address) but want to focus on analyzing student grades. In this case, the address and parents’ names categories are not important to you. Retaining these unneeded categories takes up unnecessary space and can potentially bog down runtime.

Pandas provides a handy way of removing unwanted columns or rows from a DataFrame with the drop() function. Let’s look at a simple example where we drop a number of columns from a DataFrame.

First, let’s create a DataFrame out of the CSV file ‘BL-Flickr-Images-Book.csv’. In the examples below, we pass a relative path to pd.read_csv, meaning that all of the datasets are in a folder named Datasets in our current working directory:

>>> df = pd.read_csv('Datasets/BL-Flickr-Images-Book.csv')
>>> df.head()
   Identifier             Edition Statement      Place of Publication  \
0         206                           NaN                    London
1         216                           NaN  London; Virtue & Yorston
2         218                           NaN                    London
3         472                           NaN                    London
4         480  A new edition, revised, etc.                    London

  Date of Publication              Publisher  \
0         1879 [1878]       S. Tinsley & Co.
1                1868           Virtue & Co.
2                1869  Bradbury, Evans & Co.
3                1851          James Darling
4                1857   Wertheim & Macintosh

                                               Title     Author  \
0                  Walter Forbes. [A novel.] By A. A      A. A.
1  All for Greed. [A novel. The dedication signed...  A., A. A.
2  Love the Avenger. By the author of “All for Gr...  A., A. A.
3  Welsh Sketches, chiefly ecclesiastical, to the...  A., E. S.
4  [The World in which I live, and my place in it...  A., E. S.

                                   Contributors Corporate Author  \
0                                FORBES, Walter.              NaN
1  BLAZE DE BURY, Marie Pauline Rose - Baroness              NaN
2  BLAZE DE BURY, Marie Pauline Rose - Baroness              NaN
3                    Appleyard, Ernest Silvanus.              NaN
4                            BROOME, John Henry.              NaN

  Corporate Contributors Former owner Engraver Issuance type  \
0                    NaN          NaN      NaN   monographic
1                    NaN          NaN      NaN   monographic
2                    NaN          NaN      NaN   monographic
3                    NaN          NaN      NaN   monographic
4                    NaN          NaN      NaN   monographic

                                          Flickr URL  \
0  http://www.flickr.com/photos/britishlibrary/ta...
1  http://www.flickr.com/photos/britishlibrary/ta...
2  http://www.flickr.com/photos/britishlibrary/ta...
3  http://www.flickr.com/photos/britishlibrary/ta...
4  http://www.flickr.com/photos/britishlibrary/ta...

                            Shelfmarks
0    British Library HMNTS 12641.b.30.
1    British Library HMNTS 12626.cc.2.
2    British Library HMNTS 12625.dd.1.
3  British Library HMNTS 10369.bbb.15.
4     British Library HMNTS 9007.d.28.
When we look at the first five entries using the head() method, we can see that a handful of columns provide ancillary information that would be helpful to the library but isn’t very descriptive of the books themselves: Edition Statement, Corporate Author, Corporate Contributors, Former owner, Engraver, Contributors, Issuance type, and Shelfmarks. We can drop these columns in the following way:
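As a runnable sketch of the drop, using a tiny stand-in DataFrame (the full CSV isn’t loaded in this snippet; the column names mirror the book dataset):

```python
import pandas as pd

# Tiny stand-in for the book dataset, with two of the ancillary columns
df = pd.DataFrame({
    'Identifier': [206, 216],
    'Place of Publication': ['London', 'London; Virtue & Yorston'],
    'Edition Statement': [None, None],
    'Shelfmarks': ['British Library HMNTS 12641.b.30.',
                   'British Library HMNTS 12626.cc.2.'],
})

to_drop = ['Edition Statement', 'Shelfmarks']

# axis=1 tells drop() to look for the labels among the columns;
# inplace=True mutates df instead of returning a modified copy
df.drop(to_drop, inplace=True, axis=1)

print(list(df.columns))  # → ['Identifier', 'Place of Publication']
```
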
Above, we defined a list that contains the names of all the columns we want to drop. Next, we call the drop() function on our object, passing in the inplace parameter as True and the axis parameter as 1. This tells Pandas that we want the changes made directly to our object and that it should look for the values to be dropped among the columns of the object. When we inspect the DataFrame again, we’ll see that the unwanted columns have been removed:
Alternatively, we could remove the columns by passing them to the columns parameter directly, instead of separately specifying the labels to be removed and the axis where Pandas should look for the labels:
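A minimal sketch of the columns= form, again on a toy DataFrame:

```python
import pandas as pd

df = pd.DataFrame({
    'Identifier': [206, 216],
    'Engraver': [None, None],
    'Issuance type': ['monographic', 'monographic'],
})

# No axis argument needed: columns= already says where to look
df.drop(columns=['Engraver', 'Issuance type'], inplace=True)

print(list(df.columns))  # → ['Identifier']
```
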
This syntax is more intuitive and readable: what we’re trying to do here is directly apparent. If you know in advance which columns you’d like to retain, another option is to pass them to the usecols argument of pd.read_csv.

>>> df.head()
   Identifier      Place of Publication Date of Publication  \
0         206                    London         1879 [1878]
1         216  London; Virtue & Yorston                1868
2         218                    London                1869
3         472                    London                1851
4         480                    London                1857

               Publisher                                              Title  \
0       S. Tinsley & Co.                  Walter Forbes. [A novel.] By A. A
1           Virtue & Co.  All for Greed. [A novel. The dedication signed...
2  Bradbury, Evans & Co.  Love the Avenger. By the author of “All for Gr...
3          James Darling  Welsh Sketches, chiefly ecclesiastical, to the...
4   Wertheim & Macintosh  [The World in which I live, and my place in it...

      Author                                         Flickr URL
0      A. A.  http://www.flickr.com/photos/britishlibrary/ta...
1  A., A. A.  http://www.flickr.com/photos/britishlibrary/ta...
2  A., A. A.  http://www.flickr.com/photos/britishlibrary/ta...
3  A., E. S.  http://www.flickr.com/photos/britishlibrary/ta...
4  A., E. S.  http://www.flickr.com/photos/britishlibrary/ta...

Changing the Index of a DataFrame

A Pandas Index extends the functionality of NumPy arrays to allow for more versatile slicing and labeling. In many cases, it is helpful to use a uniquely valued identifying field of the data as its index. For example, in the dataset used in the previous section, it can be expected that when a librarian searches for a record, they may input the unique identifier (values in the Identifier column) for a book:
Let’s replace the existing index with this column using set_index:
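A sketch on a toy DataFrame (standing in for the book dataset):

```python
import pandas as pd

df = pd.DataFrame({
    'Identifier': [206, 216, 218],
    'Title': ['Walter Forbes. [A novel.] By A. A',
              'All for Greed.',
              'Love the Avenger.'],
})

# set_index returns a modified copy by default, so we reassign
df = df.set_index('Identifier')

print(df.index.name)  # → Identifier
```
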
Technical Detail: Unlike primary keys in SQL, a Pandas Index doesn’t make any guarantee of being unique, although many indexing and merging operations will notice a speedup in runtime if it is. We can access each record in a straightforward way with loc[]. Although loc[] may not have all that intuitive of a name, it allows us to do label-based indexing, which is the labeling of a row or record without regard to its position:
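For instance, a minimal label-based lookup (toy data; the real dataset has many more rows):

```python
import pandas as pd

df = pd.DataFrame({
    'Identifier': [206, 216],
    'Title': ['Walter Forbes. [A novel.] By A. A',
              'All for Greed.'],
}).set_index('Identifier')

# loc[] is label-based: 206 is an index *label*, not a position
print(df.loc[206, 'Title'])  # → Walter Forbes. [A novel.] By A. A
```
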
In other words, 206 is the first label of the index. To access it by position, we could use df.iloc[0], which does position-based indexing. Technical Detail: .loc[] is technically a class instance and has some special syntax that doesn’t conform exactly to most plain-vanilla Python instance methods. Previously, our index was a RangeIndex: integers starting from 0, analogous to Python’s built-in range. By passing a column name to set_index, we have changed the index to the values in Identifier.

You may have noticed that we reassigned the variable to the object returned by the method with df = df.set_index(...). This is because, by default, the method returns a modified copy of our object and does not make the changes directly to the object. We can avoid this by setting the inplace parameter:

df.set_index('Identifier', inplace=True)
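Putting both pieces together (toy data), the inplace form and the equivalence of label-based and position-based access:

```python
import pandas as pd

df = pd.DataFrame({
    'Identifier': [206, 216],
    'Title': ['Walter Forbes.', 'All for Greed.'],
})

df.set_index('Identifier', inplace=True)  # mutates df, returns None

# Same record two ways: by label with loc, by position with iloc
assert df.loc[206, 'Title'] == df.iloc[0]['Title']
```
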
Tidying up Fields in the Data

So far, we have removed unnecessary columns and changed the index of our DataFrame to something more sensible. In this section, we will clean specific columns and get them to a uniform format to get a better understanding of the dataset and enforce consistency. In particular, we will be cleaning Date of Publication and Place of Publication. Upon inspection, all of the data types are currently the object dtype, which is roughly analogous to str in native Python. It encapsulates any field that can’t be neatly fit as numerical or categorical data. This makes sense, since we’re working with data that is initially a bunch of messy strings:
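You can verify the dtypes yourself; a sketch on a toy frame with two of the messy columns:

```python
import pandas as pd

df = pd.DataFrame({'Date of Publication': ['1879 [1878]', '1868'],
                   'Place of Publication': ['London', 'Oxford']})

# Both columns hold strings, so Pandas stores them as object dtype ('O')
print(df.dtypes.tolist())  # → [dtype('O'), dtype('O')]
```
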
One field where it makes sense to enforce a numeric value is the date of publication, so that we can do calculations down the road. A particular book can have only one date of publication. Therefore, we need to do the following:

- Remove the extra dates in square brackets, wherever present: 1879 [1878]
- Convert date ranges to their “start date”, wherever present: 1860-63; 1839, 38-54
- Completely remove the dates we are not certain about and replace them with NumPy’s NaN: [1897?]
- Convert the string nan to NumPy’s NaN value
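The steps above can be sketched on a few sample values using str.extract and pd.to_numeric (the sample Series here is hypothetical, but its values mirror the dataset):

```python
import pandas as pd

dates = pd.Series(['1879 [1878]', '1868', '[1897?]', '1860-63', 'nan'],
                  dtype='object')

regex = r'^(\d{4})'  # capture four digits at the start of the string
extr = dates.str.extract(regex, expand=False)

# Values that don't start with four digits become NaN
print(extr.tolist())  # → ['1879', '1868', nan, '1860', nan]

# The column is still object dtype; to_numeric gives us numbers
num = pd.to_numeric(extr)
print(num.dtype)  # → float64
```
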
Synthesizing these patterns, we can actually take advantage of a single regular expression to extract the publication year:

regex = r'^(\d{4})'

The regular expression above is meant to find any four digits at the beginning of a string, which suffices for our case. The above is a raw string (meaning that a backslash is no longer an escape character), which is standard practice with regular expressions. The \d represents any digit, and {4} repeats this rule four times. The ^ character matches the start of a string, and the parentheses denote a capturing group, which signals to Pandas that we want to extract that part of the regex. (We want ^ to avoid cases where [ starts off the string.) Let’s see what happens when we run this regex across our dataset:

Further Reading: Not familiar with regex? You can inspect the expression above at regex101.com and learn all about regular expressions with Regular Expressions: Regexes in Python.

Technically, this column still has object dtype, but we can easily get its numerical version with pd.to_numeric:

This results in about one in every ten values being missing, which is a small price to pay for now being able to do computations on the remaining valid values:

Great! That’s done!

Combining str Methods with NumPy to Clean Columns

Above, you may have noticed the use of df['Date of Publication'].str. This attribute is a way to access speedy string operations in Pandas that largely mimic operations on native Python strings or compiled regular expressions, such as split(), replace(), and capitalize(). To clean the Place of Publication field, we can combine Pandas str methods with NumPy’s np.where function, which is basically a vectorized form of Excel’s IF() macro. It has the following syntax:

np.where(condition, then, else)

Here, condition is either an array-like object or a Boolean mask.
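A tiny sketch of the basic form (assuming numpy is imported as np; the array here is hypothetical):

```python
import numpy as np

s = np.array([1, -2, 3, -4])

# Where the condition holds, take 'pos'; otherwise take 'neg'
out = np.where(s > 0, 'pos', 'neg')

print(out.tolist())  # → ['pos', 'neg', 'pos', 'neg']
```
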
then is the value to be used if condition evaluates to True, and else is the value to be used otherwise. Essentially, np.where takes each element in the object used for condition, checks whether that particular element evaluates to True in the context of the condition, and returns an ndarray containing then or else, depending on which applies. It can be nested into a compound if-then statement, allowing us to compute values based on multiple conditions:

np.where(condition1, x1,
    np.where(condition2, x2,
        np.where(condition3, x3, ...)))

We’ll be making use of these two functions to clean Place of Publication, since this column has string objects. Here are the contents of the column:

We see that for some rows, the place of publication is surrounded by other unnecessary information. If we were to look at more values, we would see that this is the case only for some rows that have their place of publication as ‘London’ or ‘Oxford’. Let’s take a look at two specific entries:

These two books were published in the same place, but one has hyphens in the name of the place while the other does not. To clean this column in one sweep, we can use str.contains() to get a Boolean mask. We clean the column as follows:

>>> pub = df['Place of Publication']
>>> london = pub.str.contains('London')
>>> oxford = pub.str.contains('Oxford')

We combine them with np.where:

df['Place of Publication'] = np.where(london, 'London',
                                      np.where(oxford, 'Oxford',
                                               pub.str.replace('-', ' ')))

Here, the np.where function is called in a nested structure, with condition being a Series of Booleans obtained with str.contains(). The contains() method works similarly to the built-in in keyword used to find the occurrence of an entity in an iterable (or a substring in a string). The replacement to be used is a string representing our desired place of publication. We also replace hyphens with a space using str.replace() and reassign the column in our DataFrame.

Although there is more dirty data in this dataset, we will discuss only these two columns for now. Let’s have a look at the first five entries, which look a lot crisper than when we started out:

Note: At this point, Place of Publication would be a good candidate for conversion to a Categorical dtype, because we can encode the fairly small unique set of cities with integers.
(The memory usage of a Categorical is proportional to the number of categories plus the length of the data; an object dtype is a constant times the length of the data.)

Cleaning the Entire Dataset Using the applymap Function

In certain situations, you will see that the “dirt” is not localized to one column but is more spread out.
There are some instances where it would be helpful to apply a customized function to each cell or element of a DataFrame. Pandas’ applymap() method is similar to the built-in map() function and simply applies a function to all the elements in a DataFrame. Let’s look at an example. We will create a DataFrame out of the “university_towns.txt” file:

We see that we have periodic state names followed by the university towns in that state: StateA TownA1 TownA2 StateB TownB1 TownB2…. If we look at the way state names are written in the file, we’ll see that all of them have the “[edit]” substring in them.

We can take advantage of this pattern by creating a list of (state, city) tuples and wrapping that list in a DataFrame:

We can wrap this list in a DataFrame and set the columns as “State” and “RegionName”. Pandas will take each element in the list and set State to the left value and RegionName to the right value. The resulting DataFrame looks like this:

While we could have cleaned these strings in the for loop above, Pandas makes it easy. We only need the state name and the town name and can remove everything else. While we could use Pandas’ .str methods again here, we could also use applymap() to map a Python callable to each element of the DataFrame. We have been using the term element, but what exactly do we mean by it? Consider the following “toy” DataFrame:

In this example, each cell (‘Mock’, ‘Dataset’, ‘Python’, ‘Pandas’, etc.) is an element. Therefore, applymap() will apply a function to each of these independently. Let’s define that function:

Pandas’ applymap() takes only one parameter, which is the function (callable) that should be applied to each element:

First, we define a Python function that takes an element from the DataFrame as its parameter. Inside the function, checks are performed to determine whether there’s a ( or [ in the element or not. Depending on the check, values are returned accordingly by the function. Finally, the applymap() function is called on our object.
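A runnable sketch of the whole step on two sample rows (the cleanup function below is one reasonable way to strip the “[edit]” and parenthesized suffixes, not necessarily the article’s exact implementation):

```python
import pandas as pd

towns_df = pd.DataFrame(
    [('Alabama[edit]', 'Auburn (Auburn University)[1]'),
     ('Alabama[edit]', 'Florence (University of North Alabama)')],
    columns=['State', 'RegionName'])

def get_citystate(item):
    # Keep only the text before " (" or "[", dropping "[edit]" and footnotes
    if ' (' in item:
        return item[:item.find(' (')]
    elif '[' in item:
        return item[:item.find('[')]
    else:
        return item

# Apply the callable to every element; newer pandas prefers towns_df.map(...)
towns_df = towns_df.applymap(get_citystate)

print(towns_df.iloc[0].tolist())  # → ['Alabama', 'Auburn']
```
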
Now the DataFrame is much neater:

The applymap() method took each element from the DataFrame, passed it to the function, and the original value was replaced by the returned value. It’s that simple!

Technical Detail: While it is a convenient and versatile method, applymap can have significant runtime for larger datasets, because it maps a Python callable to each individual element. In some cases, it can be more efficient to do vectorized operations that utilize Cython or NumPy (which, in turn, makes calls in C) under the hood.

Renaming Columns and Skipping Rows

Often, the datasets you’ll work with will have either column names that are not easy to understand, or unimportant information in the first few and/or last rows, such as definitions of the terms in the dataset or footnotes. In that case, we’d want to rename columns and skip certain rows so that we can drill down to the necessary information with correct and sensible labels. To demonstrate how we can go about doing this, let’s first take a glance at the initial five rows of the “olympics.csv” dataset:

Now, we’ll read it into a Pandas DataFrame:

This is messy indeed! The columns are the string form of integers indexed at 0. The row which should have been our header (i.e. the one to be used to set the column names) is at olympics_df.iloc[0]. This happened because our CSV file starts with 0, 1, 2, …, 15. Also, if we were to go to the source of this dataset, we’d see that NaN above should really be something like “Country”, ? Summer is supposed to represent “Summer Games”, 01 ! should be “Gold”, and so on. Therefore, we need to do two things:

- Skip one row and set the header as the first (0-indexed) row
- Rename the columns
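Both steps can be sketched together on an inline two-row stand-in for olympics.csv (the quirky column names here mirror the dataset, but the data is made up for illustration):

```python
import io
import pandas as pd

raw = io.StringIO(
    "0,1,2,3\n"
    ",? Summer,01 !,02 !\n"          # the row that should be the header
    "Afghanistan (AFG),13,0,0\n"
)

# header=1 tells read_csv to use the second row as the column names
olympics_df = pd.read_csv(raw, header=1)

# Map the cryptic names to usable ones and rename in place
new_names = {'Unnamed: 0': 'Country',
             '? Summer': 'Summer Olympics',
             '01 !': 'Gold',
             '02 !': 'Silver'}
olympics_df.rename(columns=new_names, inplace=True)

print(list(olympics_df.columns))
# → ['Country', 'Summer Olympics', 'Gold', 'Silver']
```
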
We can skip rows and set the header while reading the CSV file by passing some parameters to the read_csv() function. This function takes a lot of optional parameters, but in this case we only need one (header) to remove the 0th row:

We now have the correct row set as the header and all unnecessary rows removed. Take note of how Pandas has changed the name of the column containing the names of the countries from NaN to Unnamed: 0. To rename the columns, we will make use of a DataFrame’s rename() method, which allows you to relabel an axis based on a mapping (in this case, a dict). Let’s start by defining a dictionary that maps current column names (as keys) to more usable ones (the dictionary’s values):

We call the rename() function on our object:

Setting inplace to True specifies that our changes be made directly to the object. Let’s see if this checks out:

Python Data Cleaning: Recap and Resources

In this tutorial, you learned how you can drop unnecessary information from a dataset using the drop() function, as well as how to set an index for your dataset so that items in it can be referenced easily. Moreover, you learned how to clean object fields with the .str accessor and how to clean the entire dataset using the applymap() method. Lastly, we explored how to skip rows in a CSV file and rename columns using the rename() method.

Knowing about data cleaning is very important, because it is a big part of data science. You now have a basic understanding of how Pandas and NumPy can be leveraged to clean datasets! Check out the links below to find additional resources that will help you on your Python data science journey:
This tutorial has a related video course created by the Real Python team. Watch it together with the written tutorial to deepen your understanding: Data Cleaning With pandas and NumPy.
About Malay Agarwal: A tech geek with a philosophical mind and a hand that can wield a pen. Each tutorial at Real Python is created by a team of developers so that it meets our high quality standards. The team members who worked on this tutorial are: Aldren, Brad, and Joanna.