Welcome to pyreadstat’s documentation! — pyreadstat 1.3.1 documentation

Metadata Object Description

Each parsing function returns a metadata object in addition to a pandas dataframe. That object contains fields such as the column names and labels, the number of rows and columns, the original variable formats and the value labels.

There are two functions to deal with value labels: set_value_labels and set_catalog_to_sas. You can read about them in the next section.
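
As an illustration, a minimal sketch of reading a file, inspecting a few of those metadata fields and applying the value labels with set_value_labels (the file path is a placeholder):

    import pyreadstat

    # every parsing function returns a (dataframe, metadata) tuple
    df, meta = pyreadstat.read_sav("survey.sav")  # hypothetical path

    # a few of the metadata fields
    print(meta.column_names)           # list of variable names
    print(meta.column_labels)          # list of variable labels
    print(meta.number_rows, meta.number_columns)
    print(meta.variable_value_labels)  # dict: variable name -> {value: label}

    # replace coded values by their value labels using the metadata
    df_labeled = pyreadstat.set_value_labels(df, meta, formats_as_category=True)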

Functions Documentation
pyreadstat.pyreadstat.read_dta(filename_path, metadataonly=False, dates_as_pandas_datetime=False, apply_value_formats=False, formats_as_category=True, formats_as_ordered_category=False, encoding=None, usecols=None, user_missing=False, disable_datetime_conversion=False, row_limit=0, row_offset=0, output_format=None, extra_datetime_formats=None, extra_date_formats=None, extra_time_formats=None)

Read a STATA dta file

Parameters:
  • filename_path (str, bytes or Path-like object) – path to the file. In Python 2.7 the string is assumed to be utf-8 encoded

  • metadataonly (bool, optional) – by default False. If True, only metadata will be read and no data, so that you can get all elements in the metadata object. The dataframe will be set with the correct column names but no data.

  • dates_as_pandas_datetime (bool, optional) – by default False. If True, dates will be transformed to pandas datetime64 instead of date; effective only for pandas output.

  • apply_value_formats (bool, optional) – by default False. If True, values in the dataframe will be replaced by their value labels from the metadata, where appropriate labels are found.

  • formats_as_category (bool, optional) – by default True. Takes effect only if apply_value_formats is True. If True, variables whose values were replaced by their formatted version will be transformed into categories.

  • formats_as_ordered_category (bool, optional) – defaults to False. If True, variables having formats will be transformed into ordered categories/enums. It has precedence over formats_as_category, meaning that if this is True it takes effect irrespective of the value of formats_as_category.

  • encoding (str, optional) – Defaults to None. If set, the system will use the defined encoding instead of guessing it. It has to be an iconv-compatible name

  • usecols (list, optional) – a list with column names to read from the file. Only those columns will be imported. Case sensitive!

  • user_missing (bool, optional) – by default False, in which case user-defined missing values are delivered as NaN. If True, the missing values will be delivered as is, and an extra piece of information will be set in the metadata (missing_user_values) so that those values can be interpreted as missing.

  • disable_datetime_conversion (bool, optional) – if True, pyreadstat will not attempt to convert dates, datetimes and times to python objects; those columns will remain as numbers. In order to convert them later to an appropriate python object, the user can use the information about the original variable format stored in the metadata object in original_variable_types. Disabling datetime conversion speeds up reading files. In addition, it helps to overcome situations where there are datetimes beyond the limits of python datetime (which is limited to year 9999; dates beyond that will raise an OverflowError in pyreadstat).

  • row_limit (int, optional) – maximum number of rows to read. The default is 0 meaning unlimited.

  • row_offset (int, optional) – start reading rows after this offset. By default 0, meaning start with the first row not skipping anything.

  • output_format (str, optional) – one of ‘pandas’ (default), ‘polars’ or ‘dict’. If ‘dict’, a dictionary with numpy arrays as values will be returned; the user can then convert it to their preferred data format. Using ‘dict’ is faster than the other types because the conversion to a dataframe is avoided.

  • extra_datetime_formats (list of str, optional) – formats to be parsed as python datetime objects

  • extra_date_formats (list of str, optional) – formats to be parsed as python date objects

  • extra_time_formats (list of str, optional) – formats to be parsed as python time objects

Returns:
  • data_frame (dataframe or dict) – a dataframe or dict with the data.

  • metadata – object with metadata. Look at the documentation for more information.
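
A minimal usage sketch (the file path and column names are hypothetical):

    import pyreadstat

    # plain read: returns a pandas dataframe plus the metadata object
    df, meta = pyreadstat.read_dta("sample.dta")

    # read only two columns, apply value labels and keep user-defined missing values
    df2, meta2 = pyreadstat.read_dta(
        "sample.dta",
        usecols=["mylabl", "myord"],   # case sensitive
        apply_value_formats=True,
        user_missing=True,
    )

    # metadata only: empty dataframe but all metadata fields populated
    _, meta3 = pyreadstat.read_dta("sample.dta", metadataonly=True)
    print(meta3.number_rows, meta3.column_names)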

pyreadstat.pyreadstat.read_file_in_chunks(read_function, file_path, chunksize=100000, offset=0, limit=0, multiprocess=False, num_processes=4, num_rows=None, **kwargs)

Returns a generator that allows reading a file in chunks.

If using multiprocessing, for Xport, Por and some defective sav files where the number of rows in the dataset cannot be obtained from the metadata, the parameter num_rows must be set to a number equal to or larger than the number of rows in the dataset. That information must be obtained by the user before running this function.

Parameters:
  • read_function (pyreadstat function) – a pyreadstat reading function

  • file_path (string) – path to the file to be read

  • chunksize (integer, optional) – size of the chunks to read

  • offset (integer, optional) – start reading the file after this number of rows

  • limit (integer, optional) – stop reading the file after this number of rows; the limit is counted on top of the offset

  • multiprocess (bool, optional) – use multiprocessing to read each chunk?

  • num_processes (integer, optional) – in case multiprocess is true, how many workers/processes to spawn?

  • num_rows (integer, optional) – number of rows in the dataset. When using multiprocessing it is obligatory for files where the number of rows cannot be obtained from the metadata, such as por and some defective xport and sav files. The user must obtain this value by reading the file without multiprocessing first or by any other means. A number larger than the actual number of rows will work as well. It is ignored if the number of rows can be obtained from the metadata or if multiprocessing is not used.

  • kwargs (dict, optional) – any other keyword argument to pass to the read_function. row_limit and row_offset will be discarded if present.

Yields:
  • data_frame (dataframe) – a dataframe with the data

  • metadata – object with metadata. Look at the documentation for more information.

Returns:
  • it (generator) – a generator that reads the file in chunks and yields the tuples above.
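
A minimal usage sketch (the file path is hypothetical):

    import pyreadstat

    # iterate over a large sav file in chunks of 10,000 rows
    reader = pyreadstat.read_file_in_chunks(
        pyreadstat.read_sav, "big_file.sav", chunksize=10000
    )
    for df_chunk, meta in reader:
        # process each chunk, e.g. aggregate it or write it to a database
        print(len(df_chunk))

    # multiprocessing variant; num_rows would be required for por/xport files
    reader_mp = pyreadstat.read_file_in_chunks(
        pyreadstat.read_sav, "big_file.sav", chunksize=10000,
        multiprocess=True, num_processes=4,
    )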

pyreadstat.pyreadstat.read_file_multiprocessing(read_function, file_path, num_processes=None, num_rows=None, **kwargs)

Reads a file in parallel using multiprocessing. For Xport, Por and some defective sav files where the number of rows in the dataset cannot be obtained from the metadata, the parameter num_rows must be set to a number equal to or larger than the number of rows in the dataset. That information must be obtained by the user before running this function.

Parameters:
  • read_function (pyreadstat function) – a pyreadstat reading function

  • file_path (string) – path to the file to be read

  • num_processes (integer, optional) – number of processes to spawn; by default the minimum of 4 and the number of cores on the computer

  • num_rows (integer, optional) – number of rows in the dataset. Obligatory for files where the number of rows cannot be obtained from the metadata, such as por and some defective xport and sav files. The user must obtain this value by reading the file without multiprocessing first or by any other means. A number larger than the actual number of rows will work as well. It is ignored if the number of rows can be obtained from the metadata.

  • kwargs (dict, optional) – any other keyword argument to pass to the read_function.

Returns:
  • data_frame (dataframe) – a dataframe with the data

  • metadata – object with metadata. Look at the documentation for more information.
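
A minimal usage sketch (the file paths and the row count are hypothetical):

    import pyreadstat

    # read the whole file in parallel
    df, meta = pyreadstat.read_file_multiprocessing(
        pyreadstat.read_sav, "big_file.sav", num_processes=4
    )

    # for a por file the row count cannot be read from the metadata,
    # so num_rows must be supplied (here assumed to be known beforehand)
    df2, meta2 = pyreadstat.read_file_multiprocessing(
        pyreadstat.read_por, "big_file.por", num_processes=4, num_rows=50000
    )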

pyreadstat.pyreadstat.read_por(filename_path, metadataonly=False, dates_as_pandas_datetime=False, apply_value_formats=False, formats_as_category=True, formats_as_ordered_category=False, usecols=None, disable_datetime_conversion=False, row_limit=0, row_offset=0, output_format=None, extra_datetime_formats=None, extra_date_formats=None, extra_time_formats=None)

Read an SPSS por file. Files are assumed to be UTF-8 encoded; the encoding cannot be set to anything else.

Parameters:
  • filename_path (str, bytes or Path-like object) – path to the file. In Python 2.7 the string is assumed to be utf-8 encoded

  • metadataonly (bool, optional) – by default False. If True, only metadata will be read and no data, so that you can get all elements in the metadata object. The dataframe will be set with the correct column names but no data. Notice that number_rows will be None, as por files do not have the number of rows recorded in the file metadata.

  • dates_as_pandas_datetime (bool, optional) – by default False. If True, dates will be transformed to pandas datetime64 instead of date; effective only for pandas output.

  • apply_value_formats (bool, optional) – by default False. If True, values in the dataframe will be replaced by their value labels from the metadata, where appropriate labels are found.

  • formats_as_category (bool, optional) – by default True. Takes effect only if apply_value_formats is True. If True, variables whose values were replaced by their formatted version will be transformed into categories.

  • formats_as_ordered_category (bool, optional) – defaults to False. If True, variables having formats will be transformed into ordered categories/enums. It has precedence over formats_as_category, meaning that if this is True it takes effect irrespective of the value of formats_as_category.

  • usecols (list, optional) – a list with column names to read from the file. Only those columns will be imported. Case sensitive!

  • disable_datetime_conversion (bool, optional) – if True, pyreadstat will not attempt to convert dates, datetimes and times to python objects; those columns will remain as numbers. In order to convert them later to an appropriate python object, the user can use the information about the original variable format stored in the metadata object in original_variable_types. Disabling datetime conversion speeds up reading files. In addition, it helps to overcome situations where there are datetimes beyond the limits of python datetime (which is limited to year 9999; dates beyond that will raise an OverflowError in pyreadstat).

  • row_limit (int, optional) – maximum number of rows to read. The default is 0 meaning unlimited.

  • row_offset (int, optional) – start reading rows after this offset. By default 0, meaning start with the first row not skipping anything.

  • output_format (str, optional) – one of ‘pandas’ (default), ‘polars’ or ‘dict’. If ‘dict’, a dictionary with numpy arrays as values will be returned; the user can then convert it to their preferred data format. Using ‘dict’ is faster than the other types because the conversion to a dataframe is avoided.

  • extra_datetime_formats (list of str, optional) – formats to be parsed as python datetime objects

  • extra_date_formats (list of str, optional) – formats to be parsed as python date objects

  • extra_time_formats (list of str, optional) – formats to be parsed as python time objects

Returns:
  • data_frame (dataframe or dict) – a dataframe or dict with the data.

  • metadata – object with metadata. Look at the documentation for more information.
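
A minimal usage sketch (the file path is hypothetical):

    import pyreadstat

    # read a por file; the encoding is fixed to UTF-8
    df, meta = pyreadstat.read_por("sample.por")

    # metadata only; note that meta2.number_rows will be None for por files
    _, meta2 = pyreadstat.read_por("sample.por", metadataonly=True)
    print(meta2.column_names, meta2.number_rows)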

pyreadstat.pyreadstat.read_sas7bcat(filename_path, encoding=None, output_format=None)

Read a SAS sas7bcat file. The returned dataframe will be empty. The metadata object will contain a dictionary value_labels that holds the formats. When parsing a sas7bdat file, the metadata dictionary variable_to_label contains a map from variable name to format. In order to apply the catalog to a sas7bdat file, use set_catalog_to_sas or pass the catalog file as an argument to read_sas7bdat directly. SAS catalog files are difficult: some of them can be read only by a specific SAS version, they may contain unusual encodings, etc. Therefore many catalog files may not be readable by this library.

Parameters:
  • filename_path (str, bytes or Path-like object) – path to the file. The string is assumed to be utf-8 encoded

  • encoding (str, optional) – Defaults to None. If set, the system will use the defined encoding instead of guessing it. It has to be an iconv-compatible name

  • output_format (str, optional) – one of ‘pandas’ (default), ‘polars’ or ‘dict’. If ‘dict’, a dictionary with numpy arrays as values will be returned. Notice that for this function the resulting object is always empty; this is done for consistency with the other functions and has no impact on performance.

Returns:
  • data_frame (dataframe or dict) – a dataframe with the data (no data in this case, so it will always be empty).

  • metadata – object with metadata. The member value_labels is the one that contains the formats. Look at the documentation for more information.
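
A minimal sketch of reading a catalog and applying it to a previously read sas7bdat file with set_catalog_to_sas (file paths are hypothetical):

    import pyreadstat

    # read the catalog; the returned dataframe is empty by design
    _, catalog_meta = pyreadstat.read_sas7bcat("formats.sas7bcat")
    print(catalog_meta.value_labels)  # dict: format name -> {value: label}

    # apply the catalog formats to a sas7bdat file read earlier
    df, meta = pyreadstat.read_sas7bdat("data.sas7bdat")
    df_fmt, meta_fmt = pyreadstat.set_catalog_to_sas(df, meta, catalog_meta)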

pyreadstat.pyreadstat.read_sas7bdat(filename_path, metadataonly=False, dates_as_pandas_datetime=False, catalog_file=None, formats_as_category=True, formats_as_ordered_category=False, encoding=None, usecols=None, user_missing=False, disable_datetime_conversion=False, row_limit=0, row_offset=0, output_format=None, extra_datetime_formats=None, extra_date_formats=None, extra_time_formats=None)

Read a SAS sas7bdat file. It also accepts the path to a sas7bcat catalog file.

Parameters:
  • filename_path (str, bytes or Path-like object) – path to the file. In python 2.7 the string is assumed to be utf-8 encoded.

  • metadataonly (bool, optional) – by default False. If True, only metadata will be read and no data, so that you can get all elements in the metadata object. The dataframe will be set with the correct column names but no data.

  • dates_as_pandas_datetime (bool, optional) – by default False. If True, dates will be transformed to pandas datetime64 instead of date; effective only for pandas output.

  • catalog_file (str, optional) – path to a sas7bcat file. By default None. If not None, the catalog file will be parsed and values will be replaced by the formats in the catalog, where appropriate ones are found. If this is not the behavior you are looking for, use read_sas7bcat to parse the catalog independently of the sas7bdat and set_catalog_to_sas to apply the resulting formats to sas7bdat files.

  • formats_as_category (bool, optional) – takes effect only if catalog_file was specified. If True, variables whose values were replaced by the formats will be transformed into categories.

  • formats_as_ordered_category (bool, optional) – defaults to False. If True, variables having formats will be transformed into ordered categories/enums. It has precedence over formats_as_category, meaning that if this is True it takes effect irrespective of the value of formats_as_category.

  • encoding (str, optional) – Defaults to None. If set, the system will use the defined encoding instead of guessing it. It has to be an iconv-compatible name

  • usecols (list, optional) – a list with column names to read from the file. Only those columns will be imported. Case sensitive!

  • user_missing (bool, optional) – by default False, in which case user-defined missing values are delivered as NaN. If True, the missing values will be delivered as is, and an extra piece of information will be set in the metadata (missing_user_values) so that those values can be interpreted as missing.

  • disable_datetime_conversion (bool, optional) – if True, pyreadstat will not attempt to convert dates, datetimes and times to python objects; those columns will remain as numbers. In order to convert them later to an appropriate python object, the user can use the information about the original variable format stored in the metadata object in original_variable_types. Disabling datetime conversion speeds up reading files. In addition, it helps to overcome situations where there are datetimes beyond the limits of python datetime (which is limited to year 9999; dates beyond that will raise an OverflowError in pyreadstat).

  • row_limit (int, optional) – maximum number of rows to read. The default is 0 meaning unlimited.

  • row_offset (int, optional) – start reading rows after this offset. By default 0, meaning start with the first row not skipping anything.

  • output_format (str, optional) – one of ‘pandas’ (default), ‘polars’ or ‘dict’. If ‘dict’, a dictionary with numpy arrays as values will be returned; the user can then convert it to their preferred data format. Using ‘dict’ is faster than the other types because the conversion to a dataframe is avoided.

  • extra_datetime_formats (list of str, optional) – formats to be parsed as python datetime objects

  • extra_date_formats (list of str, optional) – formats to be parsed as python date objects

  • extra_time_formats (list of str, optional) – formats to be parsed as python time objects

Returns:
  • data_frame (dataframe or dict) – a dataframe or dict with the data.

  • metadata – object with metadata. The member variable_value_labels will be empty unless a valid catalog file is supplied. Look at the documentation for more information.
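
A minimal usage sketch (file paths are hypothetical):

    import pyreadstat

    # read a sas7bdat file on its own
    df, meta = pyreadstat.read_sas7bdat("data.sas7bdat")

    # read it together with its catalog so values are replaced by their formats
    df2, meta2 = pyreadstat.read_sas7bdat(
        "data.sas7bdat",
        catalog_file="formats.sas7bcat",
        formats_as_category=True,
    )
    print(meta2.variable_value_labels)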

pyreadstat.pyreadstat.read_sav(filename_path, metadataonly=False, dates_as_pandas_datetime=False, apply_value_formats=False, formats_as_category=True, formats_as_ordered_category=False, encoding=None, usecols=None, user_missing=False, disable_datetime_conversion=False, row_limit=0, row_offset=0, output_format=None, extra_datetime_formats=None, extra_date_formats=None, extra_time_formats=None)

Read an SPSS sav or zsav (compressed) file.

Parameters:
  • filename_path (str, bytes or Path-like object) – path to the file. In Python 2.7 the string is assumed to be utf-8 encoded

  • metadataonly (bool, optional) – by default False. If True, only metadata will be read and no data, so that you can get all elements in the metadata object. The dataframe will be set with the correct column names but no data.

  • dates_as_pandas_datetime (bool, optional) – by default False. If True, dates will be transformed to pandas datetime64 instead of date; effective only for pandas output.

  • apply_value_formats (bool, optional) – by default False. If True, values in the dataframe will be replaced by their value labels from the metadata, where appropriate labels are found.

  • formats_as_category (bool, optional) – by default True. Takes effect only if apply_value_formats is True. If True, variables whose values were replaced by their formatted version will be transformed into categories.

  • formats_as_ordered_category (bool, optional) – defaults to False. If True, variables having formats will be transformed into ordered categories/enums. It has precedence over formats_as_category, meaning that if this is True it takes effect irrespective of the value of formats_as_category.

  • encoding (str, optional) – Defaults to None. If set, the system will use the defined encoding instead of guessing it. It has to be an iconv-compatible name

  • usecols (list, optional) – a list with column names to read from the file. Only those columns will be imported. Case sensitive!

  • user_missing (bool, optional) – by default False, in which case user-defined missing values are delivered as NaN. If True, the missing values will be delivered as is, and an extra piece of information will be set in the metadata (missing_ranges) so that those values can be interpreted as missing.

  • disable_datetime_conversion (bool, optional) – if True, pyreadstat will not attempt to convert dates, datetimes and times to python objects; those columns will remain as numbers. In order to convert them later to an appropriate python object, the user can use the information about the original variable format stored in the metadata object in original_variable_types. Disabling datetime conversion speeds up reading files. In addition, it helps to overcome situations where there are datetimes beyond the limits of python datetime (which is limited to year 9999; dates beyond that will raise an OverflowError in pyreadstat).

  • row_limit (int, optional) – maximum number of rows to read. The default is 0 meaning unlimited.

  • row_offset (int, optional) – start reading rows after this offset. By default 0, meaning start with the first row not skipping anything.

  • output_format (str, optional) – one of ‘pandas’ (default), ‘polars’ or ‘dict’. If ‘dict’, a dictionary with numpy arrays as values will be returned; the user can then convert it to their preferred data format. Using ‘dict’ is faster than the other types because the conversion to a dataframe is avoided.

  • extra_datetime_formats (list of str, optional) – formats to be parsed as python datetime objects

  • extra_date_formats (list of str, optional) – formats to be parsed as python date objects

  • extra_time_formats (list of str, optional) – formats to be parsed as python time objects

Returns:
  • data_frame (dataframe or dict) – a dataframe or dict with the data.

  • metadata – object with metadata. Look at the documentation for more information.
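
A minimal usage sketch (the file path is hypothetical):

    import pyreadstat

    # basic read
    df, meta = pyreadstat.read_sav("survey.sav")

    # apply value labels directly and keep user-defined missing values;
    # meta2.missing_ranges then describes how to interpret them
    df2, meta2 = pyreadstat.read_sav(
        "survey.sav",
        apply_value_formats=True,
        formats_as_category=True,
        user_missing=True,
    )

    # fastest path: raw numpy arrays in a dict, no dataframe conversion
    data_dict, meta3 = pyreadstat.read_sav("survey.sav", output_format="dict")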

pyreadstat.pyreadstat.read_xport(filename_path, metadataonly=False, dates_as_pandas_datetime=False, encoding=None, usecols=None, disable_datetime_conversion=False, row_limit=0, row_offset=0, output_format=None, extra_datetime_formats=None, extra_date_formats=None, extra_time_formats=None)

Read a SAS xport file.

Parameters:
  • filename_path (str, bytes or Path-like object) – path to the file. In python 2.7 the string is assumed to be utf-8 encoded

  • metadataonly (bool, optional) – by default False. If True, only metadata will be read and no data, so that you can get all elements in the metadata object. The dataframe will be set with the correct column names but no data.

  • dates_as_pandas_datetime (bool, optional) – by default False. If True, dates will be transformed to pandas datetime64 instead of date; effective only for pandas output.

  • encoding (str, optional) – Defaults to None. If set, the system will use the defined encoding instead of guessing it. It has to be an iconv-compatible name

  • usecols (list, optional) – a list with column names to read from the file. Only those columns will be imported. Case sensitive!

  • disable_datetime_conversion (bool, optional) – if True, pyreadstat will not attempt to convert dates, datetimes and times to python objects; those columns will remain as numbers. In order to convert them later to an appropriate python object, the user can use the information about the original variable format stored in the metadata object in original_variable_types. Disabling datetime conversion speeds up reading files. In addition, it helps to overcome situations where there are datetimes beyond the limits of python datetime (which is limited to year 9999; dates beyond that will raise an OverflowError in pyreadstat).

  • row_limit (int, optional) – maximum number of rows to read. The default is 0 meaning unlimited.

  • row_offset (int, optional) – start reading rows after this offset. By default 0, meaning start with the first row not skipping anything.

  • output_format (str, optional) – one of ‘pandas’ (default), ‘polars’ or ‘dict’. If ‘dict’, a dictionary with numpy arrays as values will be returned; the user can then convert it to their preferred data format. Using ‘dict’ is faster than the other types because the conversion to a dataframe is avoided.

  • extra_datetime_formats (list of str, optional) – formats to be parsed as python datetime objects

  • extra_date_formats (list of str, optional) – formats to be parsed as python date objects

  • extra_time_formats (list of str, optional) – formats to be parsed as python time objects

Returns:
  • data_frame (dataframe or dict) – a dataframe or dict with the data.

  • metadata – object with metadata. Look at the documentation for more information.
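
A minimal usage sketch (the file path is hypothetical):

    import pyreadstat

    # read an xport file, limiting to the first 1,000 rows
    df, meta = pyreadstat.read_xport("transport.xpt", row_limit=1000)

    # keep date/time columns as raw numbers and inspect their original formats
    df2, meta2 = pyreadstat.read_xport("transport.xpt", disable_datetime_conversion=True)
    print(meta2.original_variable_types)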

pyreadstat.pyreadstat.write_dta(df, dst_path, file_label='', column_labels=None, version=15, variable_value_labels=None, missing_user_values=None, variable_format=None)

Writes a dataframe to a STATA dta file

Parameters:
  • df (dataframe) – dataframe to write to the dta file

  • dst_path (str or pathlib.Path) – full path to the result dta file

  • file_label (str, optional) – a label for the file

  • column_labels (list or dict, optional) – labels for columns (variables). If a list, it must have the same length as the number of columns, and variables with no labels must be represented by None. If a dict, keys must be variable names and values the variable labels; in that case there is no need to include all variables, and labels for non-existent variables will be ignored with no warning or error.

  • version (int, optional) – dta file version, supported from 8 to 15, default is 15

  • variable_value_labels (dict, optional) – value labels: a dictionary where each key is a variable name and each value is a dictionary mapping values to labels. Variable names must match variable names in the dataframe, otherwise they will be ignored. Value types must match the type of the column in the dataframe.

  • missing_user_values (dict, optional) – user defined missing values for numeric variables. Must be a dictionary with keys being variable names and values being a list of missing values. Missing values must be a single character between a and z.

  • variable_format (dict, optional) – sets the format of a variable. Must be a dictionary with keys being the variable names and values being strings defining the format. See README, setting variable formats section, for more information.
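
A minimal usage sketch (the dataframe contents, labels and destination path are made up):

    import pandas as pd
    import pyreadstat

    df = pd.DataFrame({"mychar": ["a", "b", "c"], "myord": [1, 2, 3]})

    pyreadstat.write_dta(
        df,
        "out.dta",
        file_label="example file",
        column_labels={"myord": "an ordinal variable"},
        version=15,
        variable_value_labels={"myord": {1: "low", 2: "medium", 3: "high"}},
    )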

pyreadstat.pyreadstat.write_por(df, dst_path, file_label='', column_labels=None, variable_format=None)

Writes a dataframe to a SPSS POR file.

Parameters:
  • df (dataframe) – data frame to write to por

  • dst_path (str or pathlib.Path) – full path to the result por file

  • file_label (str, optional) – a label for the file

  • column_labels (list or dict, optional) – labels for columns (variables). If a list, it must have the same length as the number of columns, and variables with no labels must be represented by None. If a dict, keys must be variable names and values the variable labels; in that case there is no need to include all variables, and labels for non-existent variables will be ignored with no warning or error.

  • variable_format (dict, optional) – sets the format of a variable. Must be a dictionary with keys being the variable names and values being strings defining the format. See README, setting variable formats section, for more information.
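
A minimal usage sketch (the dataframe contents, labels and destination path are made up):

    import pandas as pd
    import pyreadstat

    df = pd.DataFrame({"var1": [1.0, 2.0], "var2": ["x", "y"]})
    pyreadstat.write_por(
        df,
        "out.por",
        file_label="example por file",
        column_labels=["first variable", "second variable"],
    )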

pyreadstat.pyreadstat.write_sav(df, dst_path, file_label='', column_labels=None, compress=False, row_compress=False, note=None, variable_value_labels=None, missing_ranges=None, variable_display_width=None, variable_measure=None, variable_format=None)

Writes a dataframe to a SPSS sav or zsav file.

Parameters:
  • df (dataframe) – dataframe to write to sav or zsav

  • dst_path (str or pathlib.Path) – full path to the result sav or zsav file

  • file_label (str, optional) – a label for the file

  • column_labels (list or dict, optional) – labels for columns (variables). If a list, it must have the same length as the number of columns, and variables with no labels must be represented by None. If a dict, keys must be variable names and values the variable labels; in that case there is no need to include all variables, and labels for non-existent variables will be ignored with no warning or error.

  • compress (boolean, optional) – if True a zsav will be written; by default False, and a sav is written

  • row_compress (boolean, optional) – if True, row compression is applied; by default False. compress and row_compress cannot both be True at the same time

  • note (str or list of str, optional) – a note or list of notes to add to the file

  • variable_value_labels (dict, optional) – value labels: a dictionary where each key is a variable name and each value is a dictionary mapping values to labels. Variable names must match variable names in the dataframe, otherwise they will be ignored. Value types must match the type of the column in the dataframe.

  • missing_ranges (dict, optional) – user-defined missing values. Must be a dictionary with keys as variable names matching variable names in the dataframe. The values must be a list. Each element in that list can be either a discrete numeric or string value (max 3 per variable) or a dictionary with keys ‘hi’ and ‘lo’ to indicate the upper and lower range for numeric values (max 1 range value + 1 discrete value per variable). ‘hi’ and ‘lo’ may also be the same value, in which case it will be interpreted as a discrete missing value. For this to be effective, values in the dataframe must be the same as reported here and not NaN.

  • variable_display_width (dict, optional) – sets the display width for variables. Must be a dictionary with keys being variable names and values being integers.

  • variable_measure (dict, optional) – sets the measure type for a variable. Must be a dictionary with keys being variable names and values being strings one of “nominal”, “ordinal”, “scale” or “unknown” (default).

  • variable_format (dict, optional) – sets the format of a variable. Must be a dictionary with keys being the variable names and values being strings defining the format. See README, setting variable formats section, for more information.
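
A minimal usage sketch (the dataframe contents, labels, the missing value 999 and the destination path are made up):

    import pandas as pd
    import pyreadstat

    df = pd.DataFrame({"age": [25, 37, 999], "group": [1, 2, 1]})

    pyreadstat.write_sav(
        df,
        "out.sav",
        file_label="example sav file",
        column_labels={"age": "age in years", "group": "treatment group"},
        variable_value_labels={"group": {1: "control", 2: "treatment"}},
        missing_ranges={"age": [999]},  # 999 is a discrete user-defined missing value
        variable_measure={"age": "scale", "group": "nominal"},
    )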

pyreadstat.pyreadstat.write_xport(df, dst_path, file_label='', column_labels=None, table_name=None, file_format_version=8, variable_format=None)

Writes a dataframe to a SAS Xport (xpt) file. If no table_name is specified the dataset is named DATASET by default (take this into account when reading the file from SAS). Versions 5 and 8 are supported; the default is 8.

Parameters:
  • df (dataframe) – dataframe to write to xport

  • dst_path (str or pathlib.Path) – full path to the result xport file

  • file_label (str, optional) – a label for the file

  • column_labels (list or dict, optional) – labels for columns (variables). If a list, it must have the same length as the number of columns, and variables with no labels must be represented by None. If a dict, keys must be variable names and values the variable labels; in that case there is no need to include all variables, and labels for non-existent variables will be ignored with no warning or error.

  • table_name (str, optional) – name of the dataset, by default DATASET

  • file_format_version (int, optional) – XPORT file version, either 8 or 5, default is 8

  • variable_format (dict, optional) – sets the format of a variable. Must be a dictionary with keys being the variable names and values being strings defining the format. See README, setting variable formats section, for more information.
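
A minimal usage sketch (the dataframe contents, table name and destination path are made up):

    import pandas as pd
    import pyreadstat

    df = pd.DataFrame({"id": [1, 2, 3], "score": [10.5, 9.2, 8.8]})
    pyreadstat.write_xport(
        df,
        "out.xpt",
        table_name="MYDATA",        # otherwise the dataset is named DATASET
        file_format_version=8,
        column_labels={"score": "test score"},
    )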

