# Abstract Dataloader: Dataloader Not Included
## What is the Abstract Dataloader?
The abstract dataloader (ADL) is a minimalist specification for creating composable and interoperable dataloaders and data transformations, along with abstract template implementations and reusable generic components, including a pytorch interface.
```
Metadata ─────────────────┐
   │                      ▼
   └────► Sensor   Synchronization
            │             │
            └───► Trace ◄─┘
                    │
                    └───► Dataset ───► Transform
```
The ADL's specifications and bundled implementations lean heavily on generic type annotations in order to enable type checking with static type checkers such as mypy or pyright, as well as runtime (dynamic) type checkers such as beartype and typeguard, even when applying functor-like generic operations such as sequence loading and transform composition.
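For instance, a functor-like wrapper can lift an element-wise transform to whole sequences while carrying its type parameters along, so checkers can follow the types through the composition. The sketch below is purely illustrative; the `Map` class is hypothetical and not part of the ADL:

```python
from collections.abc import Callable, Sequence
from typing import Generic, TypeVar

TRaw = TypeVar("TRaw")
TOut = TypeVar("TOut")

class Map(Generic[TRaw, TOut]):
    """Hypothetical functor-like transform: lifts an element-wise
    function to operate on whole sequences, preserving type parameters."""

    def __init__(self, transform: Callable[[TRaw], TOut]) -> None:
        self.transform = transform

    def __call__(self, samples: Sequence[TRaw]) -> list[TOut]:
        return [self.transform(x) for x in samples]

def to_float(x: int) -> float:
    return float(x)

# Inferred as Map[int, float]; checkers know the result is list[float].
lifted = Map(to_float)
assert lifted([1, 2, 3]) == [1.0, 2.0, 3.0]
```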
**Structural Subtyping**

Since the abstract dataloader uses Python's structural subtyping feature (`Protocol`), the `abstract_dataloader` package is not a required dependency for using the abstract dataloader! Implementations which follow the specifications are fully interoperable, including with type checkers, even if they do not have any mutual dependencies, including this library.
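As a sketch of what this looks like in practice (the `Loader` protocol below is a simplified hypothetical stand-in, not the ADL's actual `Sensor` specification):

```python
from typing import Protocol, TypeVar, runtime_checkable

TData = TypeVar("TData", covariant=True)

@runtime_checkable
class Loader(Protocol[TData]):
    """Hypothetical protocol: any class with a compatible `load` method
    satisfies it, with no inheritance or shared dependency required."""

    def load(self, index: int) -> TData: ...

# Defined without any reference to Loader (or to this library):
class DoublingLoader:
    def load(self, index: int) -> int:
        return index * 2

def consume(loader: Loader[int]) -> int:
    return loader.load(0)

consume(DoublingLoader())                    # accepted by static checkers
assert isinstance(DoublingLoader(), Loader)  # and checkable at runtime
```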
**Type Checking is Optional**
While most of the non-documentation code in this library goes towards facilitating type checking of the abstract dataloader specifications, static and runtime type checking are fully optional, in line with Python's gradual typing paradigm.
Users also do not need to fully define the abstract dataloader's typed interfaces. For example, specifying a `Sensor` instead of a `Sensor[TData, TMetadata]` is perfectly valid, as type checkers will simply interpret the sensor as loading `Any` data and accepting `Any` metadata.
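Continuing the hypothetical `Loader` sketch from above, both of the following annotations type-check; the bare form simply erases the detail:

```python
# Fully parameterized: the checker knows loader.load(0) is an int.
def head_typed(loader: Loader[int]) -> int:
    return loader.load(0)

# Bare: treated as Loader[Any], so the result is Any. Still perfectly valid.
def head(loader: Loader):
    return loader.load(0)
```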
## Why Abstract?
Loading, preprocessing, and training models on time-series data is ubiquitous in machine learning for cyber-physical systems. However, unlike mainstream machine learning research, which has largely standardized around "canonical modalities" in computer vision (RGB images) and natural language processing (ordinary unstructured text), each new setting, dataset, and modality comes with a new set of tasks, questions, challenges - and data types which must be loaded and processed.
This poses a substantial software engineering challenge. With many different modalities, processing algorithms which operate on the power set of those different modalities, and downstream tasks which also each depend on some subset of modalities, two undesirable potential outcomes emerge:
- Data loading and processing components fragment into an exponential number of incompatible chunks, each of which encapsulates its required loading and processing functionality in a slightly different way. The barrier this presents to rapid prototyping needs no further explanation.
- The various software components coalesce into a monolith which nominally supports the power set of all functionality. However, in addition to the compatibility issues that come with bundling heterogeneous requirements, such as managing "non-dependencies" (i.e. dependencies which are required by the monolith, but not by a particular task), this also presents a hidden challenge: by supporting exponentially many possible configurations, such an architecture is also exponentially hard to debug and verify.
However, we do not believe that these outcomes are a foregone conclusion. In particular, we believe that it is possible to write "one true dataloader" which can scale while maintaining interoperability by not writing a common dataloader at all, but rather a common specification for writing dataloaders. We call this the "abstract dataloader".
## Setup
While it is not necessary to install the `abstract_dataloader` in order to take advantage of ADL-compliant components, installing this library provides access to `Protocol` types which describe each interface, as well as generic components which may be useful for working with ADL-compliant components.
The `abstract_dataloader` is currently distributed using github:
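For example, with pip (the repository URL below is our best guess; consult the project page for the canonical source):

```sh
pip install git+https://github.com/wiselabcmu/abstract-dataloader.git
```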
### Dependencies
As an explicit goal is to minimize dependency constraints, only the following dependencies are required:
- `python >= 3.10`: a somewhat recent version of Python is required, since the Python type annotation specifications are rapidly evolving.
- `numpy >= 1.14`: any remotely recent version of numpy is compatible, with the `1.14` minimum version only being required since this version first defined the `np.integer` type.
- `jaxtyping >= 0.2.32`: a fairly recent version of jaxtyping is also required due to the rapid pace of type annotation tooling. In particular, `jaxtyping 0.2.32` added support for `TypeVar` as array types, which is helpful for expressing array type polymorphism; see the sketch after this list.
- `typing_extensions >= 4.12`: we pull forward typing features from Python 3.12. This minimum version may be increased as we use newer typing features.
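As a rough illustration of what `TypeVar` array types enable (the function below is just an example, not part of the library):

```python
from typing import TypeVar

import numpy as np
from jaxtyping import Float

TArray = TypeVar("TArray", bound=np.ndarray)

def normalize(points: Float[TArray, "N 3"]) -> Float[TArray, "N 3"]:
    # The shared TypeVar says: whatever array type comes in, the same
    # array type comes out (array type polymorphism).
    return points / np.linalg.norm(points, axis=-1, keepdims=True)
```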
**Minimum Python Version**

We may consider upgrading our minimum Python version in the future, since `3.11` and newer versions support useful typing-related features such as the `Self` type.
**Pytorch Integration**

To use the optional pytorch integrations, we also require either `torch >= 2.2` (the first version to add `torch.utils._pytree.tree_leaves`) or `torch` together with `optree >= 0.13` (the first "mostly stable" version) in order to have access to a fully-featured tree manipulation module. The included `torch` extra will install the latest pytorch and optree, with the constraints `torch >= 2.2` and `optree >= 0.13`.
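For illustration, the core operation these modules provide looks like this (shown with the `torch` builtin; `optree.tree_leaves` behaves analogously):

```python
import torch
from torch.utils._pytree import tree_leaves  # available in torch >= 2.2

# Arbitrarily nested containers of tensors ("pytrees") are flattened
# into a plain list of leaf tensors:
batch = {"lidar": torch.zeros(128, 3), "meta": {"timestamp": torch.tensor(0.0)}}
leaves = tree_leaves(batch)
print(len(leaves))  # 2
```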