
HOW TO PARSE XML DATA INTO A SPARK DATAFRAME

Purpose:-

In one of my projects, I had a ton of XML data to parse and process. XML is an excellent format with tags, much like key-value pairs. JSON is almost the same, but it is more like a stripped-down version of XML, so JSON is very lightweight while XML is heavy.
Initially, we thought of using Python to parse the data and convert it to JSON for Spark to process. However, the challenge is the size of the data: parsing the entire 566 GB with Python alone would take a long time.
So the obvious choice was PySpark. We want to parse the data with a schema into a DataFrame for post-processing. However, out of the box, PySpark does not support the XML format.
This document demonstrates how to work with XML in PySpark. The same method should work in Spark with Scala without significant changes.

Option 1:-

Use the spark-xml parser from Databricks.
Databricks publishes two builds of the XML parser: one for Spark compiled with Scala 2.11 and another for Scala 2.12.
Please make sure you use the one that matches your Spark's Scala version.
I have Spark compiled with Scala 2.11.
We can include the dependency in a pom or sbt file, or open spark-shell (or spark-submit) with "--packages com.databricks:spark-xml_2.11:0.6.0".
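For example, launching the shells with the package pulled from Maven Central (a sketch; the _2.11 suffix matches my Spark, so adjust the suffix and version to yours):

# PySpark shell with the spark-xml package
pyspark --packages com.databricks:spark-xml_2.11:0.6.0

# the same flag works for spark-shell and spark-submit
spark-shell --packages com.databricks:spark-xml_2.11:0.6.0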
Ref: https://stackoverflow.com/questions/50429315/read-xml-in-spark


Option 2:- 

If, like my company, you have difficulties compiling the code with Maven or sbt because dependencies cannot be downloaded from the internet, you can use the JAR file directly from the Maven repository:

Scala 2.11: https://repo1.maven.org/maven2/com/databricks/spark-xml_2.11/0.6.0/spark-xml_2.11-0.6.0.jar
Scala 2.12: https://repo1.maven.org/maven2/com/databricks/spark-xml_2.12/0.6.0/spark-xml_2.12-0.6.0.jar

While launching spark-shell or running spark-submit, include the JAR with --jars /full/path/of/the/jar:
pyspark --jars /home/sandipan/Downloads/spark_jars/spark-xml_2.11-0.6.0.jar
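The same flag works for batch jobs; for example, with a hypothetical script named parse_xml.py:

spark-submit --jars /home/sandipan/Downloads/spark_jars/spark-xml_2.11-0.6.0.jar parse_xml.py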

How to parse the data:-

It is very simple: read the XML with the format option, and Spark should infer the schema.

# read the XML using the Databricks spark-xml data source;
# Spark infers the schema from the elements it finds under "rowTag"
df = spark.read \
    .format("com.databricks.spark.xml") \
    .option("rootTag", "SmsRecords") \
    .option("rowTag", "sms") \
    .load("full/path/of/the/xml")

Note:- "rootTag" is the outermost (root) tag that wraps the whole XML document, while "rowTag" marks each record that becomes a row of the DataFrame.
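For illustration, a made-up file with the following shape matches the options above: the whole document sits under <SmsRecords>, and each <sms> element becomes one row.

<SmsRecords>
    <sms>
        <address>+15550001111</address>
        <body>hello</body>
    </sms>
    <sms>
        <address>+15550002222</address>
        <body>see you soon</body>
    </sms>
</SmsRecords>

After loading, df.printSchema() and df.show() are handy for checking that Spark inferred the columns (here, address and body) the way you expect.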

[Screen capture of my code and the resulting DataFrame]
