
Importing Stringified JSON Objects Into Pandas (Part 3)

Published December 14, 2017 in data, programming

All python code in this post is Python 3.5+.

This post describes how I parsed movies_metadata.csv from the Kaggle movies dataset, a task I started in Part 1 and Part 2.

After some digging through the Pandas documentation and Stack Overflow, I found that the best solution to my parsing problems was to set the column names explicitly via the names parameter and to write custom importer functions, registered in the converters parameter. This also let me assign a missing-data sentinel value per data type (NaN for numerical data, None for objects, and so on). The chapter on handling missing data in the Python Data Science Handbook was useful here.

All columns expected to have numerical values were converted to floats except for the Movie Database IDs, which were converted to ints. Columns that contain NaNs are required to be either floats or objects.

    movies_metadata_df = pd.read_csv('{}/movies_metadata.csv'.format(path),
        header=0, delimiter=',',
        converters={'adult': convert_to_bool_object,
             'belongs_to_collection': convert_to_dict,
             'budget': convert_to_float,
             'genres': convert_to_dict_list,
             'homepage': cleanup_str,
             'tmdbId': convert_to_tmdb_id,
             'imdbId': format_imdb_id,
             'original_language': cleanup_str,
             'original_title': cleanup_str,
             'overview': cleanup_str,
             'popularity': convert_to_float,
             'poster_path': cleanup_str,
             'production_companies': convert_to_dict_list,
             'production_countries': convert_to_dict_list,
             'release_date': convert_to_date,
             'revenue': convert_to_float,
             'runtime': convert_to_float,
             'spoken_languages': convert_to_dict_list,
             'status': cleanup_str,
             'tagline': cleanup_str,
             'title': cleanup_str,
             'video': convert_to_bool_object,
             'vote_average': convert_to_float,
             'vote_count': convert_to_float})
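The numeric converters referenced above aren't shown in this excerpt (the full script is in the Gist linked at the end). A minimal sketch of what convert_to_float and convert_to_tmdb_id might look like, using NaN as the missing-value sentinel; the function bodies here are my assumption, not the article's exact code:

```python
import numpy as np

def convert_to_float(value: str) -> float:
    """Parse a numeric string, falling back to NaN as the missing sentinel."""
    try:
        return float(value)
    except (TypeError, ValueError):
        return np.nan

def convert_to_tmdb_id(value: str):
    """Parse a Movie Database ID as an int; NaN marks an invalid ID."""
    try:
        return int(value)
    except (TypeError, ValueError):
        return np.nan

print(convert_to_float('1.931659'))  # → 1.931659
print(convert_to_float(''))          # → nan
print(convert_to_tmdb_id('862'))     # → 862
```

Because NaN is itself a float, any column that can contain it ends up as float64 or object, which is why the tmdb ID column cannot stay a plain int column once invalid rows are marked.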

The convert_to_dict_list function uses Python's Abstract Syntax Tree (ast) module to convert the stringified JSON objects into Python objects by evaluating each string as a list of literal dicts. An alternative approach, suggested in the dataset comments, would have been to replace the single quotes in the data with double quotes and then parse the strings as JSON. The ast approach was simpler and cleaner.

import ast
from typing import Dict, List, Union

def convert_to_dict_list(string: str) -> Union[List[Dict], None]:
    """Use Python's Abstract Syntax Tree (ast) module to convert strings
    to a list of dict objects.

    This is an easy way to deal with the lists and the single-quoted dicts,
    which the JSON converter won't handle.
    """
    # initial sanity check for malformed input: need minimum 2 characters
    # to form a valid dict object
    if len(string) <= 1:
        return None
    try:
        converted_value = ast.literal_eval(string)
        if type(converted_value) is list:
            return converted_value
        return None
    except (ValueError, SyntaxError):
        return None
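As a quick sanity check, the same literal-eval call applied directly to a genres-style cell from the dataset (the cell contents here are illustrative):

```python
import ast

# A typical stringified 'genres' cell from movies_metadata.csv:
cell = "[{'id': 16, 'name': 'Animation'}, {'id': 35, 'name': 'Comedy'}]"
genres = ast.literal_eval(cell)
print(type(genres))       # → <class 'list'>
print(genres[0]['name'])  # → Animation
```

ast.literal_eval accepts single-quoted Python literals that json.loads rejects, which is exactly the case in this dataset.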

This row is typical of the malformed rows that were breaking pandas.read_csv: one or more of the leading columns are missing. Here, the first nine columns are absent:

 Rune Balot goes to a casino connected to the October corporation to try to wrap up her case once and for all.,1.931659,/zV8bHuSL6WXoD6FWogP9j4x80bL.jpg,"[{'name': 'Aniplex', 'id': 2883}, {'name': 'GoHands', 'id': 7759}, {'name': 'BROSTA TV', 'id': 7760}, {'name': 'Mardock Scramble Production Committee', 'id': 7761}, {'name': 'Sentai Filmworks', 'id': 33751}]","[{'iso_3166_1': 'US', 'name': 'United States of America'}, {'iso_3166_1': 'JP', 'name': 'Japan'}]",2012-09-29,0,68.0,"[{'iso_639_1': 'ja', 'name': '日本語'}]",Released,,Mardock Scramble: The Third Exhaust,False,7.0,12

Given the pattern of missing data, it made sense to identify and drop these rows before serializing the DataFrames and exporting them as pickle files. The first column holds a boolean indicating whether the movie is an adult film. Returning NaN for ‘adult’ values that couldn’t be converted to a valid boolean gave me a convenient list of rows to drop. Interestingly, it turns out that rows with invalid ‘adult’ values also had invalid Movie Database IDs and IMDB IDs. The columns expected to contain boolean values were converted to bool after the cleanup steps.
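The drop itself is a one-liner once the converter has marked bad rows with NaN. A sketch under the assumption that convert_to_bool_object maps 'True'/'False' strings to booleans and anything else to NaN (the toy DataFrame below stands in for the imported dataset):

```python
import numpy as np
import pandas as pd

def convert_to_bool_object(value: str):
    """Map the string to a bool; NaN flags a malformed row."""
    if value == 'True':
        return True
    if value == 'False':
        return False
    return np.nan

# Stand-in for the imported dataset; the middle row simulates a
# malformed row whose 'adult' cell held spilled-over text.
df = pd.DataFrame({'adult': [convert_to_bool_object(v)
                             for v in ['False', 'Rune Balot goes...', 'True']],
                   'title': ['Toy Story', 'garbage row', 'Showgirls']})
df = df.dropna(subset=['adult'])        # drop the malformed rows
df['adult'] = df['adult'].astype(bool)  # now safe to convert to bool
print(len(df))  # → 2
```

Converting to bool only after dropping the NaNs matters: astype(bool) on a column still containing NaN would silently coerce NaN to True.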

I also faced the problem of inconsistent movie ID column names across the datasets. The IMDB movie IDs are stored inconsistently too: with ‘tt’ prepended to the numeric ID (matching the IDs used in IMDB URLs) in movies_metadata.csv, and without it in links.csv and links_small.csv. Since I was setting column names explicitly, it made sense to clean up the headers and standardize the column names across all the datasets during the import step. Cleaning up the IMDB IDs was also simple to do in the converter function. The chapter on cleaning data in Data Wrangling with Python was helpful during this step.
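Standardizing the IMDB IDs can happen inside the converter itself. A sketch, assuming the ‘tt’-prefixed URL form is the standard and that classic IMDB IDs are zero-padded to seven digits; format_imdb_id is named in the read_csv call above, but this body is my assumption:

```python
def format_imdb_id(value: str):
    """Normalize IMDB IDs to the 'tt'-prefixed form used in IMDB URLs."""
    value = value.strip()
    if not value:
        return None  # missing sentinel for an empty cell
    if not value.startswith('tt'):
        # Assumed padding: classic IMDB IDs are seven digits.
        value = 'tt' + value.zfill(7)
    return value

print(format_imdb_id('114709'))     # → tt0114709
print(format_imdb_id('tt0114709'))  # → tt0114709
```

With both the bare links.csv form and the prefixed movies_metadata.csv form normalized to one representation, the datasets can be joined on the ID column directly.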

I uploaded the full script as a GitHub Gist. The pickled DataFrames are useful for import into Jupyter notebooks to do analysis.
