Why does the whole table load as null when loading a dataframe in Spark?


I'm using Apache Spark 2.3.0, but when I load the CSV and then show its data with df.show, the whole table appears as null, and I don't understand why, since the file does contain the data:

    val schema = StructType(Array(
      StructField("Rank", StringType, true),
      StructField("Grade", StringType, true),
      StructField("Channelname", StringType, true),
      StructField("Video Uploads", IntegerType, true),
      StructField("Suscribers", IntegerType, true),
      StructField("Videoviews", IntegerType, true)))

val df = sqlContext.read.format("com.databricks.spark.csv").option("header","true").schema(schema).load("33.csv")
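(Context note: when you pass Spark's CSV reader a user-supplied schema, its default PERMISSIVE mode sets every field of a row to null whenever the row cannot be parsed against that schema, which is usually what an all-null table means. One way to see what Spark actually parses from the file, as a diagnostic sketch against the same 33.csv, is to let it infer the schema and compare with the hand-written one:

    // read the same file, but let Spark guess the column types
    val probe = sqlContext.read
      .format("com.databricks.spark.csv")
      .option("header", "true")
      .option("inferSchema", "true")
      .load("33.csv")
    probe.printSchema()  // compare against the hand-written schema above
    probe.show(5)

If the inferred schema comes back as a single string column, the delimiter is not being recognized, which matches the answer below.)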

asked by senseilex on 11.10.2018 at 16:44

1 answer


Have you tried specifying the delimiter explicitly?

.option("sep", ",")

so that the read becomes:

    val df = sqlContext.read
      .format("com.databricks.spark.csv")
      .option("header", "true")
      .option("sep", ",")
      .schema(schema)
      .load("33.csv")
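As a side note, since Spark 2.0 the CSV source is built in, so the external com.databricks.spark.csv format is no longer required. A minimal equivalent sketch, assuming a SparkSession named spark is available:

    // built-in CSV reader; "sep" sets the field delimiter
    val df = spark.read
      .option("header", "true")
      .option("sep", ",")
      .schema(schema)
      .csv("33.csv")
    df.show()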
    
answered on 14.10.2018 at 13:44