Caching a DataFrame in PySpark

df = spark.read.csv("data.csv", header=True, inferSchema=True)
df.cache()   # cache() is lazy; nothing is stored yet
df.count()   # an action (e.g. count) materializes the cache