Working with Schemas in Spark Dataframes using PySpark




What's a schema in the DataFrame context?


Schemas are metadata that allow us to work with standardized data. That's my own definition, but we can also understand a schema as a structure that represents a data context or a business model.


Spark lets us attach schemas to DataFrames, and I believe this is a good way to maintain data quality and reliability; we can also use schemas to understand the data and connect it to the business.


But if you know a little more about DataFrames, you know that working with a schema isn't mandatory. Spark can infer a schema for us and reach the same result without one being defined, but depending on the data source, the inference may not work as we expect.


In this post we're going to create a simple DataFrame example that reads a CSV file without a schema, and another one that uses a defined schema. Through these examples we'll see the advantages and disadvantages of each approach.


Let's get to work!



CSV File content

"type","country","engines","first_flight","number_built"
"Airbus A220","Canada",2,2013-03-02,179
"Airbus A320","France",2,1986-06-10,10066
"Airbus A330","France",2,1992-01-02,1521
"Boeing 737","USA",2,1967-08-03,10636
"Boeing 747","USA",4,1969-12-12,1562
"Boeing 767","USA",2,1981-03-22,1219

As you can see in the content above, we have different data types: string, numeric and date columns. This content will be saved as airliners.csv in the code.



Creating a DataFrame without a Schema

from pyspark.sql import SparkSession

if __name__ == "__main__":
    # Create a local Spark session for this example
    spark = SparkSession.builder \
        .master("local[1]") \
        .appName("schema-app") \
        .getOrCreate()

    # Read the CSV using the first line as the header; no schema is provided,
    # so Spark falls back to reading every column as a string
    air_liners_df = spark.read \
        .option("header", "true") \
        .format("csv") \
        .load("airliners.csv")

    air_liners_df.show()
    air_liners_df.printSchema()

Dataframe/Print schema result
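
Since no schema was given and inference is off, Spark reads every CSV column as a string, so the printed schema should look something like this (a sketch of the expected output, not a captured run):

root
 |-- type: string (nullable = true)
 |-- country: string (nullable = true)
 |-- engines: string (nullable = true)
 |-- first_flight: string (nullable = true)
 |-- number_built: string (nullable = true)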


It seems to have worked fine, but if you pay attention you'll realize that some field types in the schema don't match their values, for example number_built, engines and first_flight. They shouldn't be string types, right?


We can try to fix this by adding the "inferSchema" option and setting it to "true".

from pyspark.sql import SparkSession

if __name__ == "__main__":
    spark = SparkSession.builder \
        .master("local[1]") \
        .appName("schema-app") \
        .getOrCreate()

    # inferSchema makes Spark sample the data and guess each column's type
    air_liners_df = spark.read \
        .option("header", "true") \
        .option("inferSchema", "true") \
        .format("csv") \
        .load("airliners.csv")

    air_liners_df.show()
    air_liners_df.printSchema()

Dataframe/Print schema result
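
With inference enabled, the printed schema should now look roughly like this (again a sketch; exact inference results can vary by Spark version):

root
 |-- type: string (nullable = true)
 |-- country: string (nullable = true)
 |-- engines: integer (nullable = true)
 |-- first_flight: string (nullable = true)
 |-- number_built: integer (nullable = true)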


Even with schema inference, the first_flight field is still kept as a string type. Let's try a DataFrame with a defined schema to see if this detail gets fixed.

 


Creating a DataFrame with a Schema


Now it's possible to see the difference between the two versions of the code. We're adding an object that represents the schema. This schema describes the content of the CSV file; note that we have to declare each column's name and type.


from pyspark.sql import SparkSession
from pyspark.sql.types import StructType, StringType, IntegerType, DateType, StructField

if __name__ == "__main__":

    spark = SparkSession.builder \
        .master("local[1]") \
        .appName("schema-app") \
        .getOrCreate()

    # Schema describing each column of the CSV: its name and its type
    struct_schema = StructType([
        StructField("type", StringType()),
        StructField("country", StringType()),
        StructField("engines", IntegerType()),
        StructField("first_flight", DateType()),
        StructField("number_built", IntegerType())
    ])

    # Pass the schema to the reader instead of letting Spark infer it
    air_liners_df = spark.read \
        .option("header", "true") \
        .format("csv") \
        .schema(struct_schema) \
        .load("airliners.csv")

    air_liners_df.show()
    air_liners_df.printSchema()

Dataframe/Print schema result
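
This time every column carries the type declared in the StructType (StructField columns are nullable by default), so the expected schema looks like:

root
 |-- type: string (nullable = true)
 |-- country: string (nullable = true)
 |-- engines: integer (nullable = true)
 |-- first_flight: date (nullable = true)
 |-- number_built: integer (nullable = true)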


After defining the schema, all the field types match their values. This shows how important it is to use schemas with DataFrames. Now it's possible to manipulate the data according to its type with no concerns, as the sketch below shows.
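

To make the benefit concrete, here is a minimal sketch of what the correct types unlock (illustrative only, reusing the air_liners_df from the example above):

from pyspark.sql import functions as F

# Date comparisons work because first_flight is a DateType column
air_liners_df.filter(F.col("first_flight") < F.to_date(F.lit("1990-01-01"))).show()

# Numeric aggregations work because number_built is an IntegerType column
air_liners_df.agg(F.sum("number_built").alias("total_built")).show()

As a side note, the schema() method also accepts a DDL-style string instead of a StructType, which is a more compact way to declare the same columns: schema("type STRING, country STRING, engines INT, first_flight DATE, number_built INT").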

 

Books to study and read


If you want to learn more and reach a higher level of knowledge, I strongly recommend reading the following books:



Spark: The Definitive Guide: Big Data Processing Made Simple is a complete reference for those who want to learn Spark and its main features. Reading this book, you will learn about DataFrames and Spark SQL through practical examples. The author dives into Spark's low-level APIs and RDDs, and also covers how Spark runs on a cluster and how to debug and monitor Spark applications. The practical examples are in Scala and Python.


Beginning Apache Spark 3: With DataFrame, Spark SQL, Structured Streaming, and Spark Machine Learning Library covers the new version of Spark and explores its main features, such as DataFrames, Spark SQL (which lets you use SQL to manipulate data) and Structured Streaming for processing data in real time. The book contains practical examples and code snippets that make the reading easier.


High Performance Spark: Best Practices for Scaling and Optimizing Apache Spark explores best practices for using Spark and the Scala language to handle large-scale data applications: techniques for getting the most out of standard RDD transformations, how Spark SQL's new interfaces improve performance over SQL's RDD data structure, examples of using the Spark MLlib and Spark ML machine learning libraries, and more.


Python Crash Course, 2nd Edition: A Hands-On, Project-Based Introduction to Programming covers the basic concepts of Python through interactive examples and best practices.


Learning Scala: Practical Functional Programming for the JVM is an excellent book that covers Scala through examples and exercises. Reading this book, you will learn about the core data types, literals, values and variables; how to build classes that compose one or more traits for full reusability; and how to create new functionality by mixing traits in at instantiation. Scala is one of the main languages in Big Data projects around the world, with huge usage in big tech companies like Twitter, and it is also Spark's core language.


Cool? I hope you enjoyed it!


