pyspark.sql.DataFrameReader.orc
DataFrameReader.orc(path, mergeSchema=None, pathGlobFilter=None, recursiveFileLookup=None, modifiedBefore=None, modifiedAfter=None)

Loads ORC files, returning the result as a DataFrame.

New in version 1.5.0.
Parameters
- path : str or list
Other Parameters
- Extra options
  For the extra options, refer to Data Source Option in the version you use.
Examples

>>> df = spark.read.orc('python/test_support/sql/orc_partitioned')
>>> df.dtypes
[('a', 'bigint'), ('b', 'int'), ('c', 'int')]