Read a model output file
Arguments
- file_path
character string. Path to the file being validated relative to the hub's model-output directory.
- hub_path
Either a character string path to a local Modeling Hub directory or an object of class
<SubTreeFileSystem>
created using functions s3_bucket() or gs_bucket() by providing a string S3 or GCS bucket name or path to a Modeling Hub directory stored in the cloud. For more details consult the Using cloud storage (S3, GCS) article in the arrow package documentation. The hub must be fully configured with valid admin.json and tasks.json files within the hub-config directory. See the cloud hub example below.
- coerce_types
character. What to coerce column types to on read. One of:
- "hub": (default) read in (csv) or coerce (parquet, arrow) to the hub schema. When coercing data types using the hub schema, output_type_id_datatype can also be used to set the output_type_id column data type manually.
- "chr": read in (csv) or coerce (parquet, arrow) all columns to character.
- "none": no coercion; use arrow read_* function defaults.
- output_type_id_datatype
character string. One of "from_config", "auto", "character", "double", "integer", "logical", "Date". Defaults to "from_config", which uses the setting in the output_type_id_datatype property in the tasks.json config file if available. If the property is not set in the config, the argument falls back to "auto", which determines the output_type_id data type automatically from the tasks.json config file as the simplest data type required to represent all output type ID values across all output types in the hub. Other data type values can be used to override automatic determination. Note that attempting to coerce output_type_id to a data type that is not valid for the data (e.g. trying to coerce "character" values to "double") will likely result in an error or potentially unexpected behaviour, so use with care. See the examples below.
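Examples
A minimal sketch of local usage, assuming this page documents the read_model_out_file() function from the hubValidations package; the hub and file paths below are hypothetical placeholders.

library(hubValidations)

# Read a submission file, coercing columns to the hub schema (the default)
out_tbl <- read_model_out_file(
  file_path = "team1-modelA/2024-10-12-team1-modelA.csv",
  hub_path = "path/to/local/hub",
  coerce_types = "hub"
)

# Read the same file with all columns coerced to character,
# useful for checks that do not depend on the hub schema
out_chr <- read_model_out_file(
  file_path = "team1-modelA/2024-10-12-team1-modelA.csv",
  hub_path = "path/to/local/hub",
  coerce_types = "chr"
)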
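A hub stored in the cloud can be accessed by passing a SubTreeFileSystem object as hub_path. This sketch assumes the arrow package is installed with S3 support; the bucket name and file are hypothetical, and output_type_id_datatype is overridden manually purely for illustration.

library(hubValidations)
library(arrow)

# SubTreeFileSystem pointing at a hub stored in an S3 bucket (hypothetical name)
cloud_hub <- s3_bucket("example-hub-bucket")

out_cloud <- read_model_out_file(
  file_path = "team1-modelA/2024-10-12-team1-modelA.parquet",
  hub_path = cloud_hub,
  coerce_types = "hub",
  output_type_id_datatype = "character"  # override config/automatic detection
)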