
To compute the mean of a column in PySpark, we use the mean function from pyspark.sql.functions.
Read a Dataset –
Let’s read a dataset to work with. We will use the clothing store sales data.
from pyspark.sql import SparkSession

# Create (or reuse) a SparkSession
spark = SparkSession.builder.getOrCreate()

# Read the CSV file; header='true' treats the first row as column names,
# and inferSchema='true' lets Spark detect each column's data type
df = spark.read.format('csv') \
    .options(header='true', inferSchema='true') \
    .load('../data/clothing_store_sales.csv')
df.show(5)

Compute the Mean of a Column in PySpark –
To compute the mean of a column, we pass it to the mean function inside select. Let’s compute the mean of the Age column.
from pyspark.sql.functions import mean
df.select(mean('Age')).show()

Related Posts –
- How to Compute Standard Deviation in PySpark?
- Compute Minimum and Maximum value of a Column in PySpark
- Count Number of Rows in a Column or DataFrame in PySpark
- describe() method – Compute Summary Statistics in PySpark