Accessing hub data on the cloud
To ensure greater access to the data created by and submitted to this hub, real-time copies of its model-output, target, and configuration files are hosted on the Hubverse’s Amazon Web Services (AWS) infrastructure, in a public S3 bucket: [hub-bucket-name]
Note: For efficient storage, all model-output files in S3 are stored in parquet format, even if the original versions in the GitHub repository are .csv.
GitHub remains the primary interface for operating the hub and collecting forecasts from modelers. However, the mirrors of hub files on S3 are the most convenient way to access hub data without using git/GitHub or cloning the entire hub to your local machine.
The sections below provide examples for accessing hub data on the cloud, depending on your goals and preferred tools. The options include:
| Access Method | Description |
|---|---|
| hubData (R) | Hubverse R client and R code for accessing hub data |
| Polars (Python) | Open-source Python library for data manipulation |
| AWS command line interface | Download hub data to your machine and use hubData or Polars for local access |
In general, accessing the data directly from S3 (instead of downloading it first) is more convenient. However, if performance is critical (for example, you’re building an interactive visualization), or if you need to work offline, we recommend downloading the data first.
hubData (R)
hubData, the Hubverse R client, can create an interactive session for accessing, filtering, and transforming hub model output data stored in S3.
hubData is a good choice if you:
- already use R for data analysis
- want to interactively explore hub data from the cloud without downloading it
- want to save a subset of the hub’s data (e.g., forecasts for a specific date or target) to your local machine
- want to save hub data in a different file format (e.g., parquet to .csv)
Installing hubData
To install hubData and its dependencies (including the dplyr and arrow packages), follow the instructions in the hubData documentation.
Using hubData
hubData’s connect_hub() function returns an Arrow multi-file dataset that represents a hub’s model output data. The dataset can be filtered and transformed using dplyr and then materialized into a local data frame using the collect_hub() function.
Accessing target data
[hubData will be updated to access target data once the Hubverse target data standards are finalized.]
Accessing model output data
Below is an example of using hubData to connect to a hub on S3 and filter the model output data.
```r
library(dplyr)
library(hubData)

bucket_name <- "[hub-bucket-name]"
hub_bucket <- s3_bucket(bucket_name)
hub_con <- hubData::connect_hub(hub_bucket, file_format = "parquet", skip_checks = TRUE)

hub_con %>%
  dplyr::filter(location == "MA", output_type == "quantile") %>%
  hubData::collect_hub()
```
Polars (Python)
The Hubverse team is currently developing a Python client (hubDataPy). Until hubDataPy is ready, the Polars library is a good option for working with hub data in S3. Similar to pandas, Polars is based on dataframes and series. However, Polars has a more straightforward API and is designed to work with larger-than-memory datasets.
Pandas users can access hub data as described below and then use the to_pandas() method to convert a Polars dataframe to pandas format.
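For example, a minimal sketch of the pandas hand-off (assuming pandas and pyarrow are installed; the bucket path is the same placeholder used throughout this page):

```python
import polars as pl

# lazily scan model-output files from S3 (placeholder bucket name)
lf = pl.scan_parquet(
    "s3://[hub-bucket-name]/model-output/*/*.parquet",
    storage_options={"skip_signature": "true"}
)

# collect() materializes a Polars DataFrame; to_pandas() converts it
pandas_df = lf.collect().to_pandas()
```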
Polars is a good choice if you:
- already use Python for data analysis
- want to interactively explore hub data from the cloud without downloading it
- want to save a subset of the hub’s data (e.g., forecasts for a specific date or target) to your local machine
- want to save hub data in a different file format (e.g., parquet to .csv)
Installing Polars
Use pip to install Polars:
```
pip install polars
```
Using Polars
The examples below use the Polars scan_parquet() function, which returns a LazyFrame. LazyFrames do not perform computations until necessary, so any filtering and transforms you apply to the data are deferred until an explicit collect() operation.
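To see this laziness in action, note that building a query reads nothing from S3; only collecting executes the plan. A small sketch (the bucket path is a placeholder, and output_type is a column from the hub model-output schema):

```python
import polars as pl

lazy = pl.scan_parquet(
    "s3://[hub-bucket-name]/model-output/*/*.parquet",
    storage_options={"skip_signature": "true"}
)

# no data has been read yet; this only extends the query plan
query = lazy.filter(pl.col("output_type") == "quantile")

# explain() prints the optimized plan; calling .collect() would execute it
print(query.explain())
```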
Accessing target data
Get all oracle-output files into a single DataFrame.
```python
import polars as pl

oracle_data = pl.scan_parquet(
    # the structure of the s3 link below will depend on how your hub organizes target data
    "s3://[hub-bucket-name]/target-data/oracle-output/*/*.parquet",
    storage_options={"skip_signature": "true"}
)

# filter and transform as needed and collect into a dataframe, for example:
oracle_dataframe = oracle_data.filter(pl.col("location") == "MA").collect()
```
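Continuing the example above, if the goal is to save a subset locally in a different format (one of the use cases listed earlier), the frame can be written out directly; the file names here are just illustrations:

```python
# save the filtered subset as a local CSV file
oracle_dataframe.write_csv("oracle-output-ma.csv")

# or stream the filtered result straight to parquet without collecting it first
oracle_data.filter(pl.col("location") == "MA").sink_parquet("oracle-output-ma.parquet")
```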
Accessing model output data
Get the model-output files for a specific team (all rounds). This example uses glob patterns to read data from multiple files into a single dataset.
```python
import polars as pl

lf = pl.scan_parquet(
    "s3://[hub-bucket-name]/model-output/[modeling team name]/*.parquet",
    storage_options={"skip_signature": "true"}
)
```
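From here, the LazyFrame can be filtered before any rows are materialized; the column names below are illustrative and depend on your hub’s model-output schema:

```python
# continuing from the example above: keep only quantile forecasts for one location
df = lf.filter(
    (pl.col("output_type") == "quantile") & (pl.col("location") == "MA")
).collect()
```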
Using partitions (hive-style)
If your data uses hive-style partitioning, Polars can use the partitions to filter the data before reading it.
```python
from datetime import datetime
import polars as pl

oracle_data = pl.scan_parquet(
    "s3://[hub-bucket-name]/target-data/oracle-output/",
    hive_partitioning=True,
    storage_options={"skip_signature": "true"}) \
    .filter(pl.col("nowcast_date") == datetime(2025, 2, 5)) \
    .collect()
```
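Because the filter applies to nowcast_date, which is a hive partition key in this layout, Polars can prune the non-matching partitions and skip downloading those files entirely.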
AWS command line interface
AWS provides a terminal-based command line interface (CLI) for exploring and downloading S3 files. This option is ideal if you:
- plan to work with hub data offline but don’t want to use git or GitHub
- want to download a subset of the data (instead of the entire hub)
- are using the data for an application that requires local storage or fast response times
Installing the AWS CLI
- Install the AWS CLI using the instructions here
- You can skip the instructions for setting up security credentials, since Hubverse data is public
Using the AWS CLI
When using the AWS CLI, the --no-sign-request option is required, since it tells AWS to bypass a credential check (i.e., --no-sign-request allows anonymous access to public S3 data).
Note: Files in the bucket’s raw directory should not be used for analysis (they’re for internal use only).
List all directories in the hub’s S3 bucket:
```
aws s3 ls [hub-bucket-name] --no-sign-request
```
List all files in the hub’s bucket:
```
aws s3 ls [hub-bucket-name] --recursive --no-sign-request
```
Download the contents of target-data to your current working directory:
```
aws s3 cp s3://[hub-bucket-name]/target-data/ . --recursive --no-sign-request
```
Download the model-output files for a specific team:
```
aws s3 cp s3://[hub-bucket-name]/model-output/[modeling-team-name]/ . --recursive --no-sign-request
```