Showing content from http://reactivex.io/RxJava/3.x/javadoc/io/reactivex/rxjava3/parallel/ParallelFlowable.html below:

ParallelFlowable (RxJava Javadoc 3.1.11)

Modifier and Type | Method and Description

<A,R> @NonNull Flowable<R> collect(@NonNull Collector<T,A,R> collector)

Reduces all values within a 'rail' and across 'rails' with the callbacks of the given Collector into one Flowable containing a single value.
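A minimal sketch of the Collector-based overload (assuming RxJava 3 on the classpath; the class name is illustrative):

```java
import java.util.List;
import java.util.stream.Collectors;
import io.reactivex.rxjava3.core.Flowable;
import io.reactivex.rxjava3.schedulers.Schedulers;

public class ParallelCollectDemo {
    public static void main(String[] args) {
        // Gather the values from all rails into one List; cross-rail order is not guaranteed.
        List<Integer> all = Flowable.range(1, 10)
                .parallel(2)                      // split the upstream into 2 rails
                .runOn(Schedulers.computation())  // each rail runs on a computation worker
                .collect(Collectors.toList())     // merge rails into a single Flowable<List<Integer>>
                .blockingFirst();
        System.out.println(all.size());           // 10
    }
}
```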

<C> @NonNull ParallelFlowable<C> collect(@NonNull Supplier<? extends C> collectionSupplier, @NonNull BiConsumer<? super C,? super T> collector)

Collect the elements in each rail into a collection supplied via a collectionSupplier and collected into with a collector action, emitting the collection at the end.

<U> @NonNull ParallelFlowable<U> compose(@NonNull ParallelTransformer<T,U> composer)

Allows composing operators, at assembly time, on top of this ParallelFlowable and returns another ParallelFlowable with composed features.

<R> @NonNull ParallelFlowable<R> concatMap(@NonNull Function<? super T,? extends Publisher<? extends R>> mapper)

Generates and concatenates Publishers on each 'rail', signalling errors immediately and generating 2 Publishers upfront.

<R> @NonNull ParallelFlowable<R> concatMap(@NonNull Function<? super T,? extends Publisher<? extends R>> mapper, int prefetch)

Generates and concatenates Publishers on each 'rail', signalling errors immediately and using the given prefetch amount for generating Publishers upfront.

<R> @NonNull ParallelFlowable<R> concatMapDelayError(@NonNull Function<? super T,? extends Publisher<? extends R>> mapper, boolean tillTheEnd)

Generates and concatenates Publishers on each 'rail', optionally delaying errors and generating 2 Publishers upfront.

<R> @NonNull ParallelFlowable<R> concatMapDelayError(@NonNull Function<? super T,? extends Publisher<? extends R>> mapper, int prefetch, boolean tillTheEnd)

Generates and concatenates Publishers on each 'rail', optionally delaying errors and using the given prefetch amount for generating Publishers upfront.

@NonNull ParallelFlowable<T> doAfterNext(@NonNull Consumer<? super T> onAfterNext)

Call the specified consumer with the current element passing through any 'rail' after it has been delivered to downstream within the rail.

@NonNull ParallelFlowable<T> doAfterTerminated(@NonNull Action onAfterTerminate)

Run the specified Action when a 'rail' completes or signals an error.

@NonNull ParallelFlowable<T> doOnCancel(@NonNull Action onCancel)

Run the specified Action when a 'rail' receives a cancellation.

@NonNull ParallelFlowable<T> doOnComplete(@NonNull Action onComplete)

Run the specified Action when a 'rail' completes.

@NonNull ParallelFlowable<T> doOnError(@NonNull Consumer<? super Throwable> onError)

Call the specified consumer with the exception passing through any 'rail'.

@NonNull ParallelFlowable<T> doOnNext(@NonNull Consumer<? super T> onNext)

Call the specified consumer with the current element passing through any 'rail'.

@NonNull ParallelFlowable<T> doOnNext(@NonNull Consumer<? super T> onNext, @NonNull BiFunction<? super Long,? super Throwable,ParallelFailureHandling> errorHandler)

Call the specified consumer with the current element passing through any 'rail' and handle errors based on the value returned by the handler function.

@NonNull ParallelFlowable<T> doOnNext(@NonNull Consumer<? super T> onNext, @NonNull ParallelFailureHandling errorHandler)

Call the specified consumer with the current element passing through any 'rail' and handle errors based on the given ParallelFailureHandling enumeration value.

@NonNull ParallelFlowable<T> doOnRequest(@NonNull LongConsumer onRequest)

Call the specified consumer with the request amount if any rail receives a request.

@NonNull ParallelFlowable<T> doOnSubscribe(@NonNull Consumer<? super Subscription> onSubscribe)

Call the specified callback when a 'rail' receives a Subscription from its upstream.

@NonNull ParallelFlowable<T> filter(@NonNull Predicate<? super T> predicate)

Filters the source values on each 'rail'.

@NonNull ParallelFlowable<T> filter(@NonNull Predicate<? super T> predicate, @NonNull BiFunction<? super Long,? super Throwable,ParallelFailureHandling> errorHandler)

Filters the source values on each 'rail' and handles errors based on the value returned by the handler function.

@NonNull ParallelFlowable<T> filter(@NonNull Predicate<? super T> predicate, @NonNull ParallelFailureHandling errorHandler)

Filters the source values on each 'rail' and handles errors based on the given ParallelFailureHandling enumeration value.

<R> @NonNull ParallelFlowable<R> flatMap(@NonNull Function<? super T,? extends Publisher<? extends R>> mapper)

Generates and flattens Publishers on each 'rail'.

<R> @NonNull ParallelFlowable<R> flatMap(@NonNull Function<? super T,? extends Publisher<? extends R>> mapper, boolean delayError)

Generates and flattens Publishers on each 'rail', optionally delaying errors.

<R> @NonNull ParallelFlowable<R> flatMap(@NonNull Function<? super T,? extends Publisher<? extends R>> mapper, boolean delayError, int maxConcurrency)

Generates and flattens Publishers on each 'rail', optionally delaying errors and capping the total number of simultaneous subscriptions to the inner Publishers.

<R> @NonNull ParallelFlowable<R> flatMap(@NonNull Function<? super T,? extends Publisher<? extends R>> mapper, boolean delayError, int maxConcurrency, int prefetch)

Generates and flattens Publishers on each 'rail', optionally delaying errors, capping the total number of simultaneous subscriptions to the inner Publishers and using the given prefetch amount for the inner Publishers.
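A small sketch of the basic flatMap overload (assuming RxJava 3 on the classpath; the class name is illustrative):

```java
import java.util.List;
import io.reactivex.rxjava3.core.Flowable;

public class ParallelFlatMapDemo {
    public static void main(String[] args) {
        // Each rail expands its values into a small inner Publisher, flattened per rail.
        List<Integer> out = Flowable.range(1, 3)
                .parallel(2)
                .flatMap(v -> Flowable.just(v, v * 10))  // 1 -> {1, 10}, 2 -> {2, 20}, 3 -> {3, 30}
                .sequential()                             // merge rails back into one Flowable
                .toList()
                .blockingGet();
        System.out.println(out.size());                   // 6 items; cross-rail order is unspecified
    }
}
```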

<U> @NonNull ParallelFlowable<U> flatMapIterable(@NonNull Function<? super T,? extends Iterable<? extends U>> mapper)

Returns a ParallelFlowable that merges each item emitted by the source on each rail with the values in an Iterable corresponding to that item that is generated by a selector.

<U> @NonNull ParallelFlowable<U> flatMapIterable(@NonNull Function<? super T,? extends Iterable<? extends U>> mapper, int bufferSize)

Returns a ParallelFlowable that merges each item emitted by the source ParallelFlowable with the values in an Iterable corresponding to that item that is generated by a selector.

<R> @NonNull ParallelFlowable<R> flatMapStream(@NonNull Function<? super T,? extends Stream<? extends R>> mapper)

Maps each upstream item on each rail into a Stream and emits the Stream's items to the downstream in a sequential fashion.

<R> @NonNull ParallelFlowable<R> flatMapStream(@NonNull Function<? super T,? extends Stream<? extends R>> mapper, int prefetch)

Maps each upstream item of each rail into a Stream and emits the Stream's items to the downstream in a sequential fashion.

static <T> @NonNull ParallelFlowable<T> from(@NonNull Publisher<? extends T> source)

Take a Publisher and prepare to consume it on multiple 'rails' (number of CPUs) in a round-robin fashion.

static <T> @NonNull ParallelFlowable<T> from(@NonNull Publisher<? extends T> source, int parallelism)

Take a Publisher and prepare to consume it on parallelism number of 'rails' in a round-robin fashion.

static <T> @NonNull ParallelFlowable<T> from(@NonNull Publisher<? extends T> source, int parallelism, int prefetch)

Take a Publisher and prepare to consume it on parallelism number of 'rails', possibly ordered, in a round-robin fashion, using a custom prefetch amount and queue for dealing with the source Publisher's values.
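A sketch of the from entry point with an explicit parallelism level (assuming RxJava 3 on the classpath; the class name is illustrative):

```java
import io.reactivex.rxjava3.core.Flowable;
import io.reactivex.rxjava3.parallel.ParallelFlowable;

public class ParallelFromDemo {
    public static void main(String[] args) {
        // Wrap an existing Publisher and fan it out to 4 rails, round-robin.
        ParallelFlowable<Integer> rails = ParallelFlowable.from(Flowable.range(1, 8), 4);
        System.out.println(rails.parallelism());                   // 4
        // Nothing runs until the rails are subscribed, e.g. via sequential():
        System.out.println(rails.sequential().count().blockingGet()); // 8
    }
}
```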

static <T> @NonNull ParallelFlowable<T> fromArray(Publisher<T>... publishers)

Wraps multiple Publishers into a ParallelFlowable which runs them in parallel and unordered.

<R> @NonNull ParallelFlowable<R> map(@NonNull Function<? super T,? extends R> mapper)

Maps the source values on each 'rail' to another value.

<R> @NonNull ParallelFlowable<R> map(@NonNull Function<? super T,? extends R> mapper, @NonNull BiFunction<? super Long,? super Throwable,ParallelFailureHandling> errorHandler)

Maps the source values on each 'rail' to another value and handles errors based on the value returned by the handler function.

<R> @NonNull ParallelFlowable<R> map(@NonNull Function<? super T,? extends R> mapper, @NonNull ParallelFailureHandling errorHandler)

Maps the source values on each 'rail' to another value and handles errors based on the given ParallelFailureHandling enumeration value.
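A sketch of the error-handling map overload (assuming RxJava 3 on the classpath; the class name and input strings are illustrative):

```java
import java.util.List;
import io.reactivex.rxjava3.core.Flowable;
import io.reactivex.rxjava3.parallel.ParallelFailureHandling;

public class ParallelMapSkipDemo {
    public static void main(String[] args) {
        // SKIP drops the item that caused the exception and keeps the rail alive.
        List<Integer> parsed = Flowable.just("1", "2", "oops", "4")
                .parallel(2)
                .map(Integer::parseInt, ParallelFailureHandling.SKIP) // "oops" throws and is skipped
                .sequential()
                .toList()
                .blockingGet();
        System.out.println(parsed.size());                            // 3
    }
}
```

Other enumeration values (STOP, ERROR, RETRY) let the handler terminate, signal, or retry instead of skipping.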

<R> @NonNull ParallelFlowable<R> mapOptional(@NonNull Function<? super T,Optional<? extends R>> mapper)

Maps the source values on each 'rail' to an Optional and emits its value, if any.

<R> @NonNull ParallelFlowable<R> mapOptional(@NonNull Function<? super T,Optional<? extends R>> mapper, @NonNull BiFunction<? super Long,? super Throwable,ParallelFailureHandling> errorHandler)

Maps the source values on each 'rail' to an Optional, emits its value, if any, and handles errors based on the value returned by the handler function.

<R> @NonNull ParallelFlowable<R> mapOptional(@NonNull Function<? super T,Optional<? extends R>> mapper, @NonNull ParallelFailureHandling errorHandler)

Maps the source values on each 'rail' to an Optional, emits its value, if any, and handles errors based on the given ParallelFailureHandling enumeration value.

abstract int parallelism()

Returns the number of expected parallel Subscribers.

@NonNull Flowable<T> reduce(@NonNull BiFunction<T,T,T> reducer)

Reduces all values within a 'rail' and across 'rails' with a reducer function into one Flowable sequence.

<R> @NonNull ParallelFlowable<R> reduce(@NonNull Supplier<R> initialSupplier, @NonNull BiFunction<R,? super T,R> reducer)

Reduces all values within a 'rail' to a single value (with a possibly different type) via a reducer function that is initialized on each rail from an initialSupplier value.
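A sketch contrasting the two reduce overloads: the seeded variant emits one partial result per rail, while the single-argument variant (above) folds those into one value. Assumes RxJava 3 on the classpath; the class name is illustrative:

```java
import java.util.List;
import io.reactivex.rxjava3.core.Flowable;

public class ParallelReduceDemo {
    public static void main(String[] args) {
        // Each of the 4 rails folds its own values into a partial sum (one result per rail).
        List<Integer> partials = Flowable.range(1, 100)
                .parallel(4)
                .reduce(() -> 0, Integer::sum)   // per-rail seed and accumulator
                .sequential()
                .toList()
                .blockingGet();
        int total = partials.stream().mapToInt(Integer::intValue).sum();
        System.out.println(partials.size() + " partial sums, total " + total); // 4 partial sums, total 5050
    }
}
```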

@NonNull ParallelFlowable<T> runOn(@NonNull Scheduler scheduler)

Specifies, via a Scheduler, where each 'rail' will observe its incoming values, with no work-stealing and the default prefetch amount.

@NonNull ParallelFlowable<T> runOn(@NonNull Scheduler scheduler, int prefetch)

Specifies, via a Scheduler, where each 'rail' will observe its incoming values, possibly with work-stealing and a given prefetch amount.
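The canonical parallel pipeline combines runOn with the surrounding operators: split, run rails concurrently, transform, then merge. A sketch, assuming RxJava 3 on the classpath (the class name is illustrative):

```java
import io.reactivex.rxjava3.core.Flowable;
import io.reactivex.rxjava3.schedulers.Schedulers;

public class ParallelRunOnDemo {
    public static void main(String[] args) {
        int sumOfSquares = Flowable.range(1, 100)
                .parallel()                       // rails = number of available CPUs
                .runOn(Schedulers.computation())  // each rail observes values on its own worker
                .map(v -> v * v)                  // CPU-bound work done in parallel
                .sequential()                     // merge rails back into one Flowable
                .reduce(0, Integer::sum)
                .blockingGet();
        System.out.println(sumOfSquares);         // 338350
    }
}
```

Without runOn, all rails would run on the subscribing thread, so no actual parallelism would occur.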

@NonNull Flowable<T> sequential()

Merges the values from each 'rail' in a round-robin or same-order fashion and exposes them as a regular Flowable sequence, running with a default prefetch value for the rails.

@NonNull Flowable<T> sequential(int prefetch)

Merges the values from each 'rail' in a round-robin or same-order fashion and exposes them as a regular Flowable sequence, running with a given prefetch value for the rails.

@NonNull Flowable<T> sequentialDelayError()

Merges the values from each 'rail' in a round-robin or same-order fashion and exposes them as a regular Flowable sequence, running with a default prefetch value for the rails and delaying errors from all rails until all of them terminate.

@NonNull Flowable<T> sequentialDelayError(int prefetch)

Merges the values from each 'rail' in a round-robin or same-order fashion and exposes them as a regular Flowable sequence, running with a given prefetch value for the rails and delaying errors from all rails until all of them terminate.

@NonNull Flowable<T> sorted(@NonNull Comparator<? super T> comparator)

Sorts the 'rails' of this ParallelFlowable and returns a Flowable that sequentially picks the smallest next value from the rails.

@NonNull Flowable<T> sorted(@NonNull Comparator<? super T> comparator, int capacityHint)

Sorts the 'rails' of this ParallelFlowable and returns a Flowable that sequentially picks the smallest next value from the rails.
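A sketch of parallel sorting: each rail sorts its own values, then the merged Flowable repeatedly picks the smallest head across rails, yielding a globally sorted sequence. Assumes RxJava 3 on the classpath; the class name is illustrative:

```java
import java.util.Comparator;
import java.util.List;
import io.reactivex.rxjava3.core.Flowable;

public class ParallelSortedDemo {
    public static void main(String[] args) {
        List<Integer> sorted = Flowable.just(5, 3, 1, 4, 2)
                .parallel(2)
                .sorted(Comparator.<Integer>naturalOrder()) // per-rail sort + merge-pick
                .toList()
                .blockingGet();
        System.out.println(sorted);  // [1, 2, 3, 4, 5]
    }
}
```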

abstract void subscribe(@NonNull Subscriber<? super T>[] subscribers)

Subscribes an array of Subscribers to this ParallelFlowable and triggers the execution chain for all 'rails'.

<R> R to(@NonNull ParallelFlowableConverter<T,R> converter)

Calls the specified converter function during assembly time and returns its resulting value.

@NonNull Flowable<List<T>> toSortedList(@NonNull Comparator<? super T> comparator)

Sorts the 'rails' according to the comparator and returns a fully sorted List as a Flowable.

@NonNull Flowable<List<T>> toSortedList(@NonNull Comparator<? super T> comparator, int capacityHint)

Sorts the 'rails' according to the comparator and returns a fully sorted List as a Flowable.

protected boolean validate(@NonNull Subscriber<?>[] subscribers)

Validates the number of subscribers and returns true if their number matches the parallelism level of this ParallelFlowable.

