pyspark.sql.functions.unix_timestamp
pyspark.sql.functions.unix_timestamp(timestamp=None, format='yyyy-MM-dd HH:mm:ss')
Converts a time string with the given pattern (‘yyyy-MM-dd HH:mm:ss’ by default) to a Unix timestamp (in seconds), using the default timezone and the default locale. Returns null if parsing fails.
If timestamp is None, the current timestamp is returned.
New in version 1.5.0.
Changed in version 3.4.0: Supports Spark Connect.
- Parameters
- timestamp : Column or column name, optional
Timestamps of string values.
- format : literal string, optional
Alternative format to use for converting (default: yyyy-MM-dd HH:mm:ss).
- Returns
Column
Unix time as a long integer.
Examples
>>> spark.conf.set("spark.sql.session.timeZone", "America/Los_Angeles")
Example 1: Returning the current timestamp as Unix time.
>>> import pyspark.sql.functions as sf
>>> spark.range(1).select(sf.unix_timestamp().alias('unix_time')).show()
+----------+
| unix_time|
+----------+
|1702018137|
+----------+
Example 2: Using the default format ‘yyyy-MM-dd HH:mm:ss’ to parse the timestamp string.
>>> import pyspark.sql.functions as sf
>>> df = spark.createDataFrame([('2015-04-08 12:12:12',)], ['ts'])
>>> df.select('*', sf.unix_timestamp('ts')).show()
+-------------------+---------------------------------------+
|                 ts|unix_timestamp(ts, yyyy-MM-dd HH:mm:ss)|
+-------------------+---------------------------------------+
|2015-04-08 12:12:12|                             1428520332|
+-------------------+---------------------------------------+
Example 3: Using a user-specified format ‘yyyy-MM-dd’ to parse the timestamp string.
>>> import pyspark.sql.functions as sf
>>> df = spark.createDataFrame([('2015-04-08',)], ['dt'])
>>> df.select('*', sf.unix_timestamp('dt', 'yyyy-MM-dd')).show()
+----------+------------------------------+
|        dt|unix_timestamp(dt, yyyy-MM-dd)|
+----------+------------------------------+
|2015-04-08|                    1428476400|
+----------+------------------------------+
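A further sketch, not among the original examples, of the null-on-failure behavior described above. The output shown is illustrative; with ANSI mode enabled, the session may raise an error on a malformed input instead of returning null.
>>> import pyspark.sql.functions as sf
>>> df = spark.createDataFrame([('not-a-date',)], ['s'])
>>> df.select('*', sf.unix_timestamp('s', 'yyyy-MM-dd')).show()
+----------+-----------------------------+
|         s|unix_timestamp(s, yyyy-MM-dd)|
+----------+-----------------------------+
|not-a-date|                         NULL|
+----------+-----------------------------+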
>>> spark.conf.unset("spark.sql.session.timeZone")
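Because parsing uses the session timezone, the same string maps to different Unix times under different zones. A minimal sketch (the alias ‘utc_secs’ is illustrative): here ‘2015-04-08’ is interpreted as midnight UTC, 25200 seconds (7 hours) earlier than midnight America/Los_Angeles in Example 3.
>>> import pyspark.sql.functions as sf
>>> spark.conf.set("spark.sql.session.timeZone", "UTC")
>>> df = spark.createDataFrame([('2015-04-08',)], ['dt'])
>>> df.select(sf.unix_timestamp('dt', 'yyyy-MM-dd').alias('utc_secs')).show()
+----------+
|  utc_secs|
+----------+
|1428451200|
+----------+
>>> spark.conf.unset("spark.sql.session.timeZone")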