Safe Haskell | Safe-Inferred
---|---
Language | Haskell2010
Synopsis
- pmap :: (MonadIO m, MonadBaseControl IO m) => Buffer b -> (a -> b) -> ListT m a -> ContT r m (ListT m b)
- pmap' :: (MonadIO m, MonadBaseControl IO m) => Buffer b -> Pipe a b m () -> ListT m a -> ContT r m (ListT m b)
- pmapGroup :: (MonadIO m, MonadBaseControl IO m) => Buffer b -> (ListT m a -> ListT m b) -> ListT m a -> ContT r m (ListT m b)
- bufferedCollate :: (MonadIO m, MonadBaseControl IO m) => Buffer batch -> Int -> ([sample] -> Maybe batch) -> ListT m sample -> ContT r m (ListT m batch)
- collate :: Monad m => Int -> ([sample] -> Maybe batch) -> ListT m sample -> ListT m batch
- enumerateData :: Monad m => ListT m a -> Producer (a, Int) m ()
- data CachedDataset (m :: * -> *) sample
- cache :: Monad m => ListT m sample -> m (CachedDataset m sample)
Documentation
pmap :: (MonadIO m, MonadBaseControl IO m) => Buffer b -> (a -> b) -> ListT m a -> ContT r m (ListT m b) Source #
Run a map function in parallel over the given stream.
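A minimal usage sketch, assuming the pipes and pipes-concurrency APIs (`Select`, `each`, `bounded`) and that `pmap` is exported from Torch.Data.Utils; the stream contents are illustrative:

```haskell
import Control.Monad.Trans.Class (lift)
import Control.Monad.Trans.Cont (runContT)
import Pipes (ListT (Select), each, enumerate, runEffect, (>->))
import qualified Pipes.Prelude as P
import Pipes.Concurrent (bounded)
import Torch.Data.Utils (pmap)

main :: IO ()
main = flip runContT pure $ do
  let xs = Select (each [1 .. 10 :: Int])      -- a ListT IO Int
  -- double each element in a worker thread, buffering up to 64 results
  ys <- pmap (bounded 64) (* 2) xs
  lift . runEffect $ enumerate ys >-> P.print  -- drain the output stream
```

The `ContT` wrapper scopes the worker thread: the mapped stream is only valid inside the continuation.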
pmap' :: (MonadIO m, MonadBaseControl IO m) => Buffer b -> Pipe a b m () -> ListT m a -> ContT r m (ListT m b) Source #
Run a pipe in parallel over the given stream.
pmapGroup :: (MonadIO m, MonadBaseControl IO m) => Buffer b -> (ListT m a -> ListT m b) -> ListT m a -> ContT r m (ListT m b) Source #
Map a ListT transform over the given stream in parallel. This is useful for functions that group elements of a stream and yield them downstream.
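Unlike pmap, the argument here is a whole-stream transform, so it can drop, merge, or regroup elements before yielding them. A sketch assuming the pipes API; `evensOnly` is a hypothetical transform:

```haskell
import Pipes (ListT (Select), enumerate, (>->))
import qualified Pipes.Prelude as P

-- A whole-stream transform rather than a per-element map: it filters
-- elements out of the stream before yielding downstream.
evensOnly :: Monad m => ListT m Int -> ListT m Int
evensOnly xs = Select (enumerate xs >-> P.filter even)

-- Then, inside a ContT block:
--   ys <- pmapGroup (bounded 16) evensOnly xs
```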
bufferedCollate :: (MonadIO m, MonadBaseControl IO m) => Buffer batch -> Int -> ([sample] -> Maybe batch) -> ListT m sample -> ContT r m (ListT m batch) Source #
Run a given batching function in parallel. See collate for how the given samples are batched.
collate :: Monad m => Int -> ([sample] -> Maybe batch) -> ListT m sample -> ListT m batch Source #
Run a batching function with an integer batch size over the given stream. The elements of the stream are split into lists of the given batch size and are collated with the given function. Only Just values are yielded downstream. If the final chunk contains fewer samples than the batch size, the batching function is passed a list shorter than the batch size.
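The chunking behaviour described above can be illustrated with a pure list analogue (this is a sketch of the semantics only; `collateList` is a hypothetical helper, while collate itself streams over ListT):

```haskell
import Data.List (unfoldr)
import Data.Maybe (mapMaybe)

-- Split samples into chunks of the batch size, collate each chunk, and
-- drop Nothing results. The final chunk may be shorter than the batch size.
collateList :: Int -> ([a] -> Maybe b) -> [a] -> [b]
collateList n f = mapMaybe f . chunksOf n
  where
    chunksOf k =
      unfoldr (\xs -> if null xs then Nothing else Just (splitAt k xs))

-- collateList 2 Just [1,2,3,4,5] yields [[1,2],[3,4],[5]]:
-- the trailing chunk [5] is shorter than the batch size.
```

A batching function that rejects short batches (returning Nothing for them) would instead drop that trailing chunk entirely.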
enumerateData :: Monad m => ListT m a -> Producer (a, Int) m () Source #
Enumerate the given stream, zipping each element with an index.
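A brief sketch of consuming the indexed producer, assuming the pipes API and that `enumerateData` is exported from Torch.Data.Utils; `logSamples` is a hypothetical helper:

```haskell
import Pipes (ListT, runEffect, (>->))
import qualified Pipes.Prelude as P
import Torch.Data.Utils (enumerateData)

-- Log each sample alongside its position in the stream.
logSamples :: ListT IO String -> IO ()
logSamples samples =
  runEffect $
    enumerateData samples
      >-> P.mapM_ (\(x, i) -> putStrLn (show i ++ ": " ++ x))
```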
data CachedDataset (m :: * -> *) sample Source #
An in-memory cached dataset. See the cache function for how to create a cached dataset.
Instances
Applicative m => Dataset (m :: Type -> Type) (CachedDataset m sample) Int (sample :: Type) Source #
  Defined in Torch.Data.Utils
cache :: Monad m => ListT m sample -> m (CachedDataset m sample) Source #
Enumerate the given stream and store it as a CachedDataset. Use this after a time-consuming preprocessing pipeline so that subsequent epochs can read the cached samples instead of rerunning the pipeline.
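A sketch of that workflow, assuming the pipes API and that `cache` and `CachedDataset` are exported from Torch.Data.Utils; the stream stands in for a costly pipeline:

```haskell
import Pipes (ListT (Select), each)
import Torch.Data.Utils (CachedDataset, cache)

-- Run the expensive preprocessing stream once and keep the result in memory;
-- later epochs read from the cache via the Dataset instance keyed by Int.
prepareOnce :: IO (CachedDataset IO String)
prepareOnce = do
  let preprocessed = Select (each ["a", "b", "c"])  -- stands in for real preprocessing
  cache preprocessed
```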