Best Practices for Streams and Lambda Expressions

1. Introduction to streams

The streams discussed here are the classes in the java.util.stream package. A stream lets us convert ordinary collection classes into streams and process them in a pipeline fashion, which greatly simplifies our programming. The core of the stream package is the Stream interface.

Stream extends BaseStream and defines many practical methods, such as filter, map, flatMap, forEach, reduce, collect and so on. Next, we will explain them one by one.

1.1 Creating a stream

There are many ways to create a stream. Since streams were introduced, all collection classes have had a stream() method, which directly returns the corresponding stream. You can also use Arrays.stream() or the static Stream.of() method:

//Stream Creation
        String[] arr = new String[]{"a", "b", "c"};
        Stream<String> stream = Arrays.stream(arr);
        Stream<String> streamOf = Stream.of("a", "b", "c");
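Beyond Arrays.stream() and Stream.of(), streams can also be built with the static factories Stream.iterate() and Stream.generate(); both produce infinite streams, so they are usually bounded with limit(). A small sketch:

```java
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

// Stream.iterate: start from a seed and apply a function repeatedly
List<Integer> iterated = Stream.iterate(1, n -> n * 2)
        .limit(5)
        .collect(Collectors.toList()); // [1, 2, 4, 8, 16]

// Stream.generate: call a Supplier for each element; limit() bounds the infinite stream
List<String> generated = Stream.generate(() -> "x")
        .limit(3)
        .collect(Collectors.toList()); // [x, x, x]
```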

1.2 Streams and multithreading

If we want to process the data of a collection with multiple threads, stream provides a very convenient method, parallelStream():

//Multi-threading
        List<String> list = new ArrayList<>();
        list.add("aaa");
        list.add("bbb");
        list.add("abc");
        list.add("ccc");
        list.add("ddd");
        list.parallelStream().forEach(element -> System.out.println(element));

1.3 Basic operations on streams

Stream operations can be divided into two types. One is intermediate operations: they return a stream, so they can be chained. The other is terminal operations, which return a concrete result type.

//Operations
        long count = list.stream().distinct().count();

In the above example, distinct() returns a stream, so operations can be chained after it. The final count() is a terminal operation and returns the result.

Matching

Stream provides three matching methods: anyMatch(), allMatch(), noneMatch(). Let's see how to use them:

//Matching
        boolean isValid = list.stream().anyMatch(element -> element.contains("h"));     // false
        boolean isValidOne = list.stream().allMatch(element -> element.contains("h"));  // false
        boolean isValidTwo = list.stream().noneMatch(element -> element.contains("h")); // true

Filtering

The filter() method allows us to filter the data in the stream to get what we need:

Stream<String> filterStream = list.stream().filter(element -> element.contains("d"));

In the above example, we selected the strings containing the letter "d" from the list.

Mapping

map() reprocesses each value in the stream and returns the processed values as a new stream.

//Mapping
        Stream<String> mappingStream = list.stream().map(element -> convertElement(element));

    private static String convertElement(String element) {
        return element + "abc";
    }

In the example above, we append "abc" to each value in the list and return the results as a new stream.

FlatMap

flatMap is very similar to map, but there is a difference. As the name suggests, flatMap maps and then flattens the result.

How to understand?

Suppose we have a CustBook class:

@Data
public class CustBook {

    List<String> bookName;
}

CustBook defines a bookName field.

Let's take a look at the results returned by the map:

List<CustBook> users = new ArrayList<>();
        users.add(new CustBook());
Stream<Stream<String>> userStreamMap
                = users.stream().map(user -> user.getBookName().stream());

In the above code, map converts each user into a stream, so the final result is a stream of streams.

If we only want a Stream<String>, we can use flatMap:

List<CustBook> users = new ArrayList<>();
        users.add(new CustBook());
        Stream<String> userStream
                = users.stream().flatMap(user -> user.getBookName().stream());

To put it simply, flatMap flattens the nested hierarchy.
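The flattening is easier to see with a self-contained sketch that uses a plain List<List<String>> in place of CustBook:

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

List<List<String>> shelves = Arrays.asList(
        Arrays.asList("Java", "Kotlin"),
        Arrays.asList("Scala"));

// map would give Stream<List<String>>; flatMap flattens it to Stream<String>
List<String> allBooks = shelves.stream()
        .flatMap(List::stream)
        .collect(Collectors.toList()); // [Java, Kotlin, Scala]
```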

Reduction

The reduce() method makes it easy to aggregate the data in a collection. reduce() receives two parameters: the first is the start value, and the second is a function representing the accumulation.

//Reduction
        List<Integer> integers = Arrays.asList(1,1,1);
        Integer reduced = integers.stream().reduce(100,(a,b) -> a + b);

In the above example, we define a list of three 1s, then call reduce(100, (a, b) -> a + b), and the final result is 103.
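reduce() also has a three-argument form taking an identity, an accumulator, and a combiner; the combiner merges partial results when the stream runs in parallel. A sketch:

```java
import java.util.Arrays;
import java.util.List;

List<Integer> nums = Arrays.asList(1, 2, 3, 4);

// identity 0, accumulator adds an element, combiner merges partial sums
int total = nums.parallelStream()
        .reduce(0, (sum, i) -> sum + i, (left, right) -> left + right); // 10
```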

Collecting

The collect() method can easily convert the stream back into a collection class, which is convenient for further processing and display:

List<String> resultList
                = list.stream().map(element -> element.toUpperCase()).collect(Collectors.toList());

2. Classification and use of functional interfaces

Java 8 introduced lambda expressions, which essentially represent anonymous functions.

Before Java 8, if you needed an anonymous function you had to implement an anonymous class, but with lambda expressions everything becomes very simple.

Let's look at an example when we talked about thread pool earlier:

//ExecutorService using class
        ExecutorService executorService = Executors.newSingleThreadExecutor();
        executorService.submit(new Runnable() {
            @Override
            public void run() {
            log.info("new runnable");
            }
        });

executorService.submit() needs to receive a Runnable. In the above example, we create an anonymous Runnable and implement its run() method.

If the above example is rewritten with lambda expression, it is as follows:

//ExecutorService using lambda
        executorService.submit(()->log.info("new runnable"));

Isn't that much simpler? Using a lambda expression lets you omit the anonymous class construction, and the code is more readable.

So can all anonymous classes be refactored into lambda expressions? Not necessarily.

Let's take a look at the characteristics of the Runnable interface:

@FunctionalInterface
public interface Runnable 

The Runnable interface is annotated with @FunctionalInterface. This annotation marks the functional interfaces we are going to talk about.

2.1 Functional Interface

A functional interface is an interface annotated with @FunctionalInterface. Its defining characteristic is having exactly one abstract method that implementations must provide. Methods carrying a default implementation (the default keyword) do not count toward this single abstract method.

In fact, this is easy to understand: when a functional interface is rewritten as a lambda expression, the lambda does not say which method it implements. If there were several abstract methods, it would be ambiguous.

@Documented
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.TYPE)
public @interface FunctionalInterface {}

Functional interfaces generally live in the java.util.function package.

Functional interfaces can be divided into many types according to the method parameters and return values to be implemented. We will introduce them respectively below.

2.2 Function: one parameter, one return value

The Function interface defines a method that receives one parameter and returns a result.

@FunctionalInterface
public interface Function<T,R> {

    /**
     * Applies this function to the given argument.
     *
     * @param t the function argument
     * @return the function result
     */
    R apply(T t);

Generally, we will use function when processing collection classes.

Map<String,Integer> nameMap = new HashMap<>();
        Integer value = nameMap.computeIfAbsent("name",s -> s.length());

In the above example, we call the computeIfAbsent method of Map and pass in a Function.

The above example can also be rewritten into a shorter one:

Integer value1 = nameMap.computeIfAbsent("name",String::length);
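Function also provides the default methods compose() and andThen() for chaining functions together. A small sketch:

```java
import java.util.function.Function;

Function<String, Integer> length = String::length;
Function<Integer, Integer> doubled = n -> n * 2;

// andThen applies length first, then doubled to its result
int result = length.andThen(doubled).apply("name"); // 4 * 2 = 8
```

compose() works the other way around: f.compose(g) applies g first, then f.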

Function does not constrain the parameter and return types to primitives. If you need a primitive parameter, you can use IntFunction, LongFunction, or DoubleFunction:

@FunctionalInterface
public interface IntFunction<R> {

    /**
     * Applies this function to the given argument.
     *
     * @param value the function argument
     * @return the function result
     */
    R apply(int value);
}

If you need a primitive return value, you can use ToIntFunction, ToLongFunction, ToDoubleFunction:

@FunctionalInterface
public interface ToDoubleFunction<T> {

    /**
     * Applies this function to the given argument.
     *
     * @param value the function argument
     * @return the function result
     */
    double applyAsDouble(T value);
}

If you want primitive types for both the parameter and the return value, you can use DoubleToIntFunction, DoubleToLongFunction, IntToDoubleFunction, IntToLongFunction, LongToIntFunction, LongToDoubleFunction:

@FunctionalInterface
public interface LongToIntFunction {

    /**
     * Applies this function to the given argument.
     *
     * @param value the function argument
     * @return the function result
     */
    int applyAsInt(long value);
}

2.3 BiFunction: two parameters, one return value

If you need to accept two parameters and return one value, you can use BiFunction or its primitive variants: ToDoubleBiFunction, ToIntBiFunction, ToLongBiFunction, etc.

@FunctionalInterface
public interface BiFunction<T,U,R> {

    /**
     * Applies this function to the given arguments.
     *
     * @param t the first function argument
     * @param u the second function argument
     * @return the function result
     */
    R apply(T t,U u);

Let's take an example with BiFunction:

//BiFunction
        Map<String,Integer> salaries = new HashMap<>();
        salaries.put("alice",100);
        salaries.put("jack",200);
        salaries.put("mark",300);

        //alice keeps 100; jack becomes 400, mark becomes 500
        salaries.replaceAll((name,oldValue) ->
                name.equals("alice") ? oldValue : oldValue + 200);

2.4 Supplier: a function without parameters

If you don't need any parameters, you can use Supplier:

@FunctionalInterface
public interface Supplier<T> {

    /**
     * Gets a result.
     *
     * @return a result
     */
    T get();
}
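A typical Supplier use is deferring a computation until it is actually needed, as Optional.orElseGet does; a sketch:

```java
import java.util.Optional;
import java.util.function.Supplier;

Supplier<String> fallback = () -> "default";

// orElseGet only invokes the Supplier when the Optional is empty
String missing = Optional.<String>empty().orElseGet(fallback); // "default"
String present = Optional.of("set").orElseGet(fallback);       // "set"
```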

2.5 Consumer: receives a parameter, returns nothing

Consumer receives a parameter but does not return any value. Let's see the definition of Consumer:

@FunctionalInterface
public interface Consumer<T> {

    /**
     * Performs this operation on the given argument.
     *
     * @param t the input argument
     */
    void accept(T t);

Let's see a concrete application of Consumer:

//Consumer
        nameMap.forEach((name,age) -> System.out.println(name + " is " + age + " years old"));

2.6 Predicate: receives a parameter, returns a boolean

Predicate receives a parameter and returns a boolean value:

@FunctionalInterface
public interface Predicate<T> {

    /**
     * Evaluates this predicate on the given argument.
     *
     * @param t the input argument
     * @return {@code true} if the input argument matches the predicate,
     * otherwise {@code false}
     */
    boolean test(T t);

It is excellent for filtering collection classes:

//Predicate
        List<String> names = Arrays.asList("A","B","C","D","E");
        List<String> namesWithA = names.stream()
                .filter(name -> name.startsWith("A"))
                .collect(Collectors.toList());
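Predicate also offers the default methods and(), or(), and negate() for combining conditions; a sketch:

```java
import java.util.Arrays;
import java.util.List;
import java.util.function.Predicate;
import java.util.stream.Collectors;

Predicate<String> startsWithA = name -> name.startsWith("A");
Predicate<String> shortName = name -> name.length() <= 1;

// and() requires both predicates to hold; or() and negate() work similarly
List<String> combined = Arrays.asList("A", "Anna", "B").stream()
        .filter(startsWithA.and(shortName))
        .collect(Collectors.toList()); // [A]
```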

2.7 Operator: receives and returns the same type

Operator interfaces receive and return the same type. There are many of them: UnaryOperator, BinaryOperator, DoubleUnaryOperator, IntUnaryOperator, LongUnaryOperator, DoubleBinaryOperator, IntBinaryOperator, LongBinaryOperator, etc.

@FunctionalInterface
public interface IntUnaryOperator {

    /**
     * Applies this operator to the given operand.
     *
     * @param operand the operand
     * @return the operator result
     */
    int applyAsInt(int operand);

Let's take an example with BinaryOperator:

 //Operator
        List<Integer> values = Arrays.asList(1,2,3,4,5);
        int sum = values.stream()
                .reduce(0,(i1,i2) -> i1 + i2);

3. Lambda expression best practices

Lambda expressions are a functional programming feature introduced in Java 8. In previous sections we covered their basic usage.

Building on that, this section explains best practices for using lambda expressions in real applications in more detail.

3.1 Standard functional interfaces are preferred

As mentioned earlier, the java.util.function package defines many functional interfaces, basically covering every shape of method we can think of.

Suppose we customize the following functional interface:

@FunctionalInterface
public interface Usage {
    String method(String string);
}

Then we need to pass in the interface in a test method:

public String test(String string,Usage usage) {
    return usage.method(string);
}

The functional interface defined above requires implementing the method method, which receives a String and returns a String. We can therefore use Function instead:

public String test(String string,Function<String,String> fn) {
    return fn.apply(string);
}

The advantage of using a standard interface is that you don't reinvent the wheel.

3.2 Using the @FunctionalInterface annotation

Although @FunctionalInterface is not required, and you can define a functional interface without it, adding it lets the compiler report an error when the definition violates the functional interface contract.

If you are maintaining a large project, the @FunctionalInterface annotation also clearly tells other people the role of the interface.

This makes the code more standardized and easier to use.

So we need to define it this way:

@FunctionalInterface
public interface Usage {
    String method(String string);
}

instead of:

public interface Usage {
    String method(String string);
}

3.3 do not abuse default methods in functional interfaces

A functional interface is an interface with exactly one unimplemented abstract method.

If there are other methods in the interface, you can use the default keyword to provide a default implementation for them.

However, interfaces can be extended, and one class can implement multiple interfaces. If the same default method is defined in several of the implemented interfaces, the compiler reports an error.

Generally speaking, the default keyword is mainly useful when evolving existing interfaces without breaking their implementations.

3.4 using lambda expressions to instantiate functional interfaces

Or the above example:

@FunctionalInterface
public interface Usage {
    String method(String string);
}

To instantiate usage, we can use the new keyword:

Usage usage = new Usage() {
    @Override
    public String method(String string) {
        return string;
    }
};

But the best way is to use a lambda expression:

Usage usage = parameter -> parameter;

3.5 do not override the method with functional interface as a parameter

How to understand? Let's look at the following two methods:

public class ProcessorImpl implements Processor {
    @Override
    public String process(Callable<String> c) throws Exception {
        // implementation details
    }
 
    @Override
    public String process(Supplier<String> s) {
        // implementation details
    }
}

The two methods have the same name and differ only in the parameter type. However, both parameters are functional interfaces that can be expressed by the same lambda expression.

When calling:

String result = processor.process(() -> "test");

Because the compiler cannot tell which method is being called, an error is reported.

The best way is to change the names of the two methods to different ones.

3.6 lambda expressions and inner classes are different

Although we mentioned earlier that lambda expressions can replace inner classes, the scoping rules of the two are different.

An inner class creates a new scope. Within that scope you can define new variables, and this refers to the inner class instance.

A lambda expression does not create a new scope. If this is used inside a lambda expression, it refers to the enclosing class.

Let's take an example:

private String value = "Outer scope value";

public String scopeExperiment() {
    Usage usage = new Usage() {
        String value = "Inner class value";
 
        @Override
        public String method(String string) {
            return this.value;
        }
    };
    String result = usage.method("");
 
    Usage usageLambda = parameter -> {
        String value = "Lambda value";
        return this.value;
    };
    String resultLambda = usageLambda.method("");
 
    return "Results: result = " + result + 
      ",resultLambda = " + resultLambda;
}

The above example outputs "Results: result = Inner class value, resultLambda = Outer scope value".

3.7 lambda expression should be as concise as possible

Usually one line of code is enough. If you have a lot of logic, you can encapsulate these logic into one method and call this method in lambda expression.

After all, a lambda expression is an expression, and the shorter the expression, the better.

Java infers the types of the parameters, so we should avoid repeating the parameter types in a lambda expression, like this:

(a,b) -> a.toLowerCase() + b.toLowerCase();

instead of:

(String a,String b) -> a.toLowerCase() + b.toLowerCase();

If there is only one parameter, parentheses are not required:

a -> a.toLowerCase();

instead of:

(a) -> a.toLowerCase();

A single-expression body does not need an explicit return:

a -> a.toLowerCase();

instead of:

a -> { return a.toLowerCase(); };

3.8 Use method references

To make lambda expressions more concise, when method references can be used, we can use method references:

a -> a.toLowerCase();

Can be replaced by:

String::toLowerCase;
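Method references come in several forms: a static method, an instance method of an arbitrary object, and a constructor. Some sketches:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Function;
import java.util.function.Supplier;

Function<String, Integer> parse = Integer::parseInt;  // static method reference
Function<String, String> upper = String::toUpperCase; // instance method of an arbitrary object
Supplier<List<String>> maker = ArrayList::new;        // constructor reference

int parsed = parse.apply("42");   // 42
String up = upper.apply("abc");   // "ABC"
List<String> fresh = maker.get(); // a new empty list
```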

3.9 effectively final variable

If a lambda expression references a local variable that is not effectively final, an error is reported.

What does "effectively final" mean? It means approximately final: as long as a variable is assigned only once, the compiler treats it as effectively final.

    String localVariable = "Local";
    Usage usage = parameter -> {
        localVariable = parameter;  // compile error: localVariable is not effectively final
        return localVariable;
    };

In the above example, localVariable is assigned twice, so it is not effectively final, and compilation fails.

Why this restriction? Lambda expressions are often used in parallel computation; when multiple threads access the same variable, the effectively final rule prevents unexpected modifications.

4. Implement if / else logic in stream expression

In stream processing, we usually encounter the judgment of if / else. How do we deal with such problems?

Remember that we mentioned in the lambda best practices above that lambda expressions should be as concise as possible, without bloated business logic inside them.

Next, let's look at a specific example.

4.1 traditional writing

Suppose we have a list of the numbers 1 to 10, and we want to handle odd and even numbers separately. In the traditional way, we would write:

    public void inForEach(){
        List<Integer> ints = Arrays.asList(1, 2, 3, 4, 5, 6, 7, 8, 9, 10);

        ints.stream()
                .forEach(i -> {
                    if (i.intValue() % 2 == 0) {
                        System.out.println("i is even");
                    } else {
                        System.out.println("i is odd");
                    }
                });
    }

In the above example, we put the if/else logic inside forEach. It works, but the code is bloated.

Let's see how to rewrite it.

4.2 using filter

We can rewrite the logic of if / else into two filters:

List<Integer> ints = Arrays.asList(1, 2, 3, 4, 5, 6, 7, 8, 9, 10);

        Stream<Integer> evenIntegers = ints.stream()
                .filter(i -> i.intValue() % 2 == 0);
        Stream<Integer> oddIntegers = ints.stream()
                .filter(i -> i.intValue() % 2 != 0);

With these two filters in place, we call forEach on each filtered stream:

        evenIntegers.forEach(i -> System.out.println("i is even"));
        oddIntegers.forEach(i -> System.out.println("i is odd"));

See how concise and clear the code has become.
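If we want the two groups collected rather than printed, Collectors.partitioningBy splits a stream on a predicate in a single pass; a self-contained sketch with the same 1 to 10 list:

```java
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

List<Integer> ints = Arrays.asList(1, 2, 3, 4, 5, 6, 7, 8, 9, 10);

// true -> even numbers, false -> odd numbers, computed in one traversal
Map<Boolean, List<Integer>> byParity = ints.stream()
        .collect(Collectors.partitioningBy(i -> i % 2 == 0));

List<Integer> evens = byParity.get(true);  // [2, 4, 6, 8, 10]
List<Integer> odds = byParity.get(false);  // [1, 3, 5, 7, 9]
```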

5. Use stream in map

Map is a very common collection type in Java. We usually need to traverse the map to obtain some values. Java 8 introduces the concept of stream, so how can we use stream in map?

5.1 basic concepts

A map has keys and values, and an entry represents a key-value pair as a whole.

Create a map:

Map<String,String> someMap = new HashMap<>();

Get entryset of map:

Set<Map.Entry<String,String>> entries = someMap.entrySet();

Get the key of the map:

Set<String> keySet = someMap.keySet();

Get the value of the map:

Collection<String> values = someMap.values();

We can see several collection types here: Set for the entries and keys, and Collection for the values.

Map itself has no stream() method, but all three of these views do:

Stream<Map.Entry<String,String>> entriesStream = entries.stream();
        Stream<String> valuesStream = values.stream();
        Stream<String> keysStream = keySet.stream();

We can traverse the map through several other streams.

5.2 using stream to obtain the key of map

Let's add a few values to the map first:

someMap.put("jack","20");
someMap.put("bill","35");

Above, we put in names as keys and ages as values.

If we want to find the key with age = 20, we can do this:

Optional<String> optionalName = someMap.entrySet().stream()
                .filter(e -> "20".equals(e.getValue()))
                .map(Map.Entry::getKey)
                .findFirst();

        log.info(optionalName.get());

Because the return value is an Optional, we can also handle the case where no matching value exists:

optionalName = someMap.entrySet().stream()
                .filter(e -> "Non ages".equals(e.getValue()))
                .map(Map.Entry::getKey).findFirst();

        log.info("{}",optionalName.isPresent());

In the above example, we call isPresent to determine whether a matching age exists.

If there are multiple values, we can write:

someMap.put("alice","20");
        List<String> listnames = someMap.entrySet().stream()
                .filter(e -> e.getValue().equals("20"))
                .map(Map.Entry::getKey)
                .collect(Collectors.toList());

        log.info("{}",listnames);

Above, we call collect(Collectors.toList()) to convert the matching keys into a list.

5.3 using stream to obtain the value of map

We can get the key of the map above. Similarly, we can also get the value of the map:

List<String> listAges = someMap.entrySet().stream()
                .filter(e -> e.getKey().equals("alice"))
                .map(Map.Entry::getValue)
                .collect(Collectors.toList());

        log.info("{}",listAges);

Above, we selected the value whose key is alice.
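The entrySet stream can also rebuild a filtered Map rather than a List, using Collectors.toMap; a sketch with the same hypothetical name/age data:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.stream.Collectors;

Map<String, String> someMap = new HashMap<>();
someMap.put("jack", "20");
someMap.put("bill", "35");
someMap.put("alice", "20");

// keep only entries whose value is "20", collecting back into a Map
Map<String, String> twenties = someMap.entrySet().stream()
        .filter(e -> "20".equals(e.getValue()))
        .collect(Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue));
```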

6. Operation type in stream and use of PEEK

Java 8 stream operations come in two types: intermediate operations and terminal operations. What is the difference between the two?

Let's look at an example of PEEK:

Stream<String> stream = Stream.of("one","two","three","four");
        stream.peek(System.out::println);

In the above example, our intention is to print out the value of stream, but there is actually no output.

Why?

6.1 Intermediate operations and terminal operations

A Java 8 stream consists of three parts: a data source, zero or more intermediate operations, and zero or one terminal operation.

Intermediate operations process the data. Note that intermediate operations are lazy: they do not run immediately, and nothing executes until a terminal operation is invoked.

The terminal operation starts the stream. Only when a terminal operation is added does the stream really execute.

So the mystery is solved: peek is an intermediate operation, which is why the example above prints nothing.

6.2 peek

Let's take a look at peek's documentation: peek is mainly intended for debugging.

Let's look at the use of debug:

Stream.of("one","two","three","four").filter(e -> e.length() > 3)
                .peek(e -> System.out.println("Filtered value: " + e))
                .map(String::toUpperCase)
                .peek(e -> System.out.println("Mapped value: " + e))
                .collect(Collectors.toList());

The above example output:

Filtered value: three
Mapped value: THREE
Filtered value: four
Mapped value: FOUR

In the above example, we output the intermediate value of stream to facilitate our debugging.

Why is it only used as debug? Let's take another example:

Stream.of("one","two","three","four").peek(u -> u.toUpperCase())
                .forEach(System.out::println);

In the above example, we use peek to try to convert each element to upper case. The output:

one
two
three
four

You can see that the elements in the stream are not converted to uppercase format.

Let's take another look at the map comparison:

Stream.of("one","two","three","four").map(u -> u.toUpperCase())
                .forEach(System.out::println);

Output:

ONE
TWO
THREE
FOUR

You can see that the map actually transforms the elements.

Of course, peek has an exception. What if the stream contains objects?

    @Data
    @AllArgsConstructor
    static class User {
        private String name;
    }

        List<User> userList = Stream.of(new User("a"), new User("b"), new User("c"))
                .peek(u -> u.setName("kkk"))
                .collect(Collectors.toList());
        log.info("{}", userList);

Output results:

10:25:59.784 [main] INFO com.flydean.PeekUsage - [PeekUsage.User(name=kkk),PeekUsage.User(name=kkk),PeekUsage.User(name=kkk)]

We see that with objects, the actual result is changed.

Why is peek different from map?

Let's look at the definitions of PEEK and map:

Stream<T> peek(Consumer<? super T> action)
<R> Stream<R> map(Function<? super T,? extends R> mapper);

peek receives a Consumer, while map receives a Function.

A Consumer returns nothing; it only performs operations on the stream elements, and since nothing is handed back to the stream, the elements remain the original ones. A Consumer can, however, still mutate a mutable object.

A Function has a return value, so the result of the operation replaces each element in the resulting stream.

This is why peek cannot change a string but can change an object.

7. Exception handling in lambda expressions

Lambda expressions were introduced in Java 8. They make our code more concise and the business logic clearer. However, exceptions are awkward in lambda expressions, because the functional interfaces provided by the JDK do not declare thrown exceptions, which means we need to handle exceptions ourselves.

Because exceptions are divided into unchecked exception and checked exception, we will discuss them separately.

7.1 handling unchecked exception

Unchecked exceptions are RuntimeExceptions, which usually occur because there is a problem in our code. A RuntimeException does not have to be caught: code that may throw one still compiles without any handler.

Let's take an example:

List<Integer> integers = Arrays.asList(1,5);
        integers.forEach(i -> System.out.println(1 / i));

This example compiles fine, but there is a lurking problem: if the list contained a 0, an ArithmeticException would be thrown.

Although this is an unchecked exception, we still want to handle it:

        integers.forEach(i -> {
            try {
                System.out.println(1 / i);
            } catch (ArithmeticException e) {
                System.err.println(
                        "Arithmetic Exception occured : " + e.getMessage());
            }
        });

In the above example, we use try/catch to handle the exception. It is simple, but it breaks the lambda best practice of conciseness; the code becomes bloated.

So we move the try/catch into a wrapper method:

    static Consumer<Integer> lambdaWrapper(Consumer<Integer> consumer) {
        return i -> {
            try {
                consumer.accept(i);
            } catch (ArithmeticException e) {
                System.err.println(
                        "Arithmetic Exception occured : " + e.getMessage());
            }
        };
    }

Then the original call becomes as follows:

integers.forEach(lambdaWrapper(i -> System.out.println(1 / i)));

However, the wrapper above is hard-coded to catch ArithmeticException. Let's adapt it into a more general version:

    static <T,E extends Exception> Consumer<T>
    consumerWrapperWithExceptionClass(Consumer<T> consumer,Class<E> clazz) {

        return i -> {
            try {
                consumer.accept(i);
            } catch (Exception ex) {
                try {
                    E exCast = clazz.cast(ex);
                    System.err.println(
                            "Exception occured : " + exCast.getMessage());
                } catch (ClassCastException ccEx) {
                    throw ex;
                }
            }
        };
    }

The method above receives an exception class and tries to cast the caught exception to it. If the cast succeeds, the exception is handled; otherwise it is rethrown.

After this processing, we call:

integers.forEach(
                consumerWrapperWithExceptionClass(
                        i -> System.out.println(1 / i),ArithmeticException.class));

7.2 handling checked exception

Checked exception is an exception that must be handled. Let's take an example:

    static void throwIOException(Integer integer) throws IOException {
    }

        List<Integer> integers = Arrays.asList(1,5);
        integers.forEach(i -> throwIOException(i));

We defined a method above that throws IOException, a checked exception that must be handled. The forEach below therefore fails to compile, because the corresponding exception is not handled.

The simplest way is to try and catch, as shown below:

        integers.forEach(i -> {
            try {
                throwIOException(i);
            } catch (IOException e) {
                throw new RuntimeException(e);
            }
        });

Of course, we have already mentioned the disadvantages of this method. Similarly, we can define a new wrapper method:

    static <T> Consumer<T> consumerWrapper(
            ThrowingConsumer<T,Exception> throwingConsumer) {

        return i -> {
            try {
                throwingConsumer.accept(i);
            } catch (Exception ex) {
                throw new RuntimeException(ex);
            }
        };
    }
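Note that ThrowingConsumer is not a JDK type; it is a custom functional interface that, unlike Consumer, is allowed to declare a checked exception. A minimal definition might look like this:

```java
@FunctionalInterface
interface ThrowingConsumer<T, E extends Exception> {
    // like Consumer.accept, but may throw a checked exception
    void accept(T t) throws E;
}
```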

We call:

integers.forEach(consumerWrapper(i -> throwIOException(i)));

We can also handle only a specific exception class, as before:

static <T,E extends Exception> Consumer<T> consumerWrapperWithExceptionClass(
            ThrowingConsumer<T,E> throwingConsumer,Class<E> exceptionClass) {

        return i -> {
            try {
                throwingConsumer.accept(i);
            } catch (Exception ex) {
                try {
                    E exCast = exceptionClass.cast(ex);
                    System.err.println(
                            "Exception occured : " + exCast.getMessage());
                } catch (ClassCastException ccEx) {
                    throw new RuntimeException(ex);
                }
            }
        };
    }

Then call:

integers.forEach(consumerWrapperWithExceptionClass(
                i -> throwIOException(i),IOException.class));

8. Throw exception in stream

In the previous article, we mentioned that to handle exceptions in stream, you need to convert checked exception to unchecked exception.

We did this:

    static <T> Consumer<T> consumerWrapper(
            ThrowingConsumer<T,Exception> throwingConsumer) {

        return i -> {
            try {
                throwingConsumer.accept(i);
            } catch (Exception ex) {
                throw new RuntimeException(ex);
            }
        };
    }

The exception is caught and encapsulated as a runtimeException.

Wrapping everything in a RuntimeException never feels quite right, so is there a better way?

8.1 throw tips

This trick relies on Java's generic type inference: when a type variable T appears only in a throws clause and nothing else constrains it, the compiler infers T as RuntimeException!

Let's look at the following example:

public class RethrowException {

    public static <T extends Exception,R> R throwException(Exception t) throws T {
        throw (T) t; // just throw it, converting the checked exception to an unchecked one
    }

}

In the above class, we define a throwException method, which receives an Exception parameter and rethrows it as T, where T is inferred as an unchecked exception.

Next, let's look at the specific usage:

@Slf4j
public class RethrowUsage {

    public static void main(String[] args) {
        try {
            throwIOException();
        } catch (IOException e) {
           log.error(e.getMessage(),e);
            RethrowException.throwException(e);
        }
    }

    static void throwIOException() throws IOException{
        throw new IOException("io exception");
    }
}

In the above example, we converted an IOException into an unchecked exception.
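Putting the pieces together, here is a minimal, self-contained sketch of how this trick helps inside a stream: sneakyThrow mirrors the throwException method above, and checkPositive is a made-up helper that declares a checked exception. The point is that the lambda compiles without wrapping anything in a RuntimeException:

```java
import java.io.IOException;
import java.util.Arrays;
import java.util.List;

public class RethrowDemo {
    // Same trick as RethrowException above: T is inferred as RuntimeException,
    // so callers are not forced to declare or catch anything.
    @SuppressWarnings("unchecked")
    static <T extends Exception> void sneakyThrow(Exception t) throws T {
        throw (T) t;
    }

    // Hypothetical helper that declares a checked exception.
    static void checkPositive(int i) throws IOException {
        if (i < 0) throw new IOException("negative: " + i);
    }

    public static void main(String[] args) {
        List<Integer> ints = Arrays.asList(1, 2, 3);
        // The checked exception is rethrown through the inferred-unchecked
        // path instead of being wrapped in a RuntimeException.
        ints.forEach(i -> {
            try {
                checkPositive(i);
            } catch (IOException e) {
                sneakyThrow(e);
            }
        });
        System.out.println("all positive");
    }
}
```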

9. Usage of collectors in stream

In Java streams, we usually need to convert the processed stream into a collection class. For this we use the Stream.collect method. collect takes a Collector, and implementing the Collector interface by hand is tedious, since several methods must be implemented.

So Java provides a simpler Collectors utility class to help us build collectors.

Next, we will explain the usage of collectors in detail.

Suppose we have two lists:

List<String> list = Arrays.asList("jack","bob","alice","mark");
List<String> duplicateList = Arrays.asList("jack","jack","mark");

The above one is a list without duplicates, and the other is a list with duplicates. In the following examples, we will use the above two lists to explain the usage of collectors.

9.1 Collectors.toList()

List<String> listResult = list.stream().collect(Collectors.toList());
        log.info("{}",listResult);

This converts the stream to a List; the concrete type is an ArrayList. If you need a specific List implementation, use the toCollection method instead.

9.2 Collectors.toSet()

Set<String> setResult = list.stream().collect(Collectors.toSet());
        log.info("{}",setResult);

toSet converts the stream to a Set; the concrete type is a HashSet. If you need a specific Set implementation, use the toCollection method instead.

Because a Set contains no duplicate elements, if we convert duplicateList, the final result contains only one jack.

Set<String> duplicateSetResult = duplicateList.stream().collect(Collectors.toSet());
        log.info("{}",duplicateSetResult);

9.3 Collectors.toCollection()

The toList and toSet above return fixed implementation types. If we want to choose the collection type ourselves, we can use toCollection():

List<String> custListResult = list.stream().collect(Collectors.toCollection(LinkedList::new));
        log.info("{}",custListResult);

In the above example, we convert it to LinkedList.

9.4 Collectors.toMap()

toMap receives two parameters: the first is the keyMapper and the second is the valueMapper:

Map<String,Integer> mapResult = list.stream()
                .collect(Collectors.toMap(Function.identity(),String::length));
        log.info("{}",mapResult);

If the stream contains duplicate keys, the conversion throws an IllegalStateException:

Map<String,Integer> duplicateMapResult = duplicateList.stream()
                .collect(Collectors.toMap(Function.identity(),String::length));

How to solve this problem? We can do this:

Map<String,Integer> duplicateMapResult2 = duplicateList.stream()
                .collect(Collectors.toMap(Function.identity(),String::length,(item,identicalItem) -> item));
        log.info("{}",duplicateMapResult2);

Adding a third parameter, the mergeFunction, to toMap resolves the key conflict.

9.5 Collectors.collectingAndThen()

collectingAndThen lets us apply a further transformation to the collected result.

List<String> collectAndThenResult = list.stream()
                .collect(Collectors.collectingAndThen(Collectors.toList(),l -> {return new ArrayList<>(l);}));
        log.info("{}",collectAndThenResult);

9.6 Collectors.joining()

joining concatenates the elements of the stream:

String joinResult = list.stream().collect(Collectors.joining());
        log.info("{}",joinResult);
        String joinResult1 = list.stream().collect(Collectors.joining(" "));
        log.info("{}",joinResult1);
        String joinResult2 = list.stream().collect(Collectors.joining(" ","prefix","suffix"));
        log.info("{}",joinResult2);

joining has three overloads: no arguments, one argument (the delimiter), or three arguments (delimiter, prefix, suffix); choose according to your needs.
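Since the example above doesn't show its output, here is a runnable sketch of the three overloads using the same name list; note the argument order in the three-argument form is delimiter, prefix, suffix:

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class JoiningDemo {
    public static void main(String[] args) {
        List<String> list = Arrays.asList("jack", "bob", "alice", "mark");
        // no arguments: plain concatenation
        System.out.println(list.stream().collect(Collectors.joining()));
        // one argument: the delimiter
        System.out.println(list.stream().collect(Collectors.joining(" ")));
        // three arguments: delimiter, prefix, suffix (in that order)
        System.out.println(list.stream().collect(Collectors.joining(" ", "prefix", "suffix")));
    }
}
```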

9.7 Collectors.counting()

counting counts the number of elements in the stream:

Long countResult = list.stream().collect(Collectors.counting());
        log.info("{}",countResult);

9.8 Collectors.summarizingDouble/Long/Int()

summarizingDouble/Long/Int generates statistics for the elements in the stream; the result is a summary-statistics object:

IntSummaryStatistics intResult = list.stream()
                .collect(Collectors.summarizingInt(String::length));
        log.info("{}",intResult);

Output results:

22:22:35.238 [main] INFO com.flydean.CollectorUsage - IntSummaryStatistics{count=4,sum=16,min=3,average=4.000000,max=5}

9.9 Collectors.averagingDouble/Long/Int()

averagingDouble/Long/Int() averages the elements in the stream:

Double averageResult = list.stream().collect(Collectors.averagingInt(String::length));
        log.info("{}",averageResult);

9.10 Collectors.summingDouble/Long/Int()

summingDouble/Long/Int() sums the elements in the stream:

Double summingResult = list.stream().collect(Collectors.summingDouble(String::length));
        log.info("{}",summingResult);

9.11 Collectors.maxBy()/minBy()

maxBy()/minBy() return the maximum or minimum element of the stream according to the provided comparator:

Optional<String> maxByResult = list.stream().collect(Collectors.maxBy(Comparator.naturalOrder()));
        log.info("{}",maxByResult);

9.12 Collectors.groupingBy()

groupingBy groups elements by a classifier function and returns a Map:

Map<Integer,Set<String>> groupByResult = list.stream()
                .collect(Collectors.groupingBy(String::length,Collectors.toSet()));
        log.info("{}",groupByResult);

9.13 Collectors.partitioningBy()

partitioningBy is a special case of groupingBy. It returns a Map whose keys are Boolean values, splitting the stream into two parts: one that matches the predicate, and one that does not:

 Map<Boolean,List<String>> partitionResult = list.stream()
                .collect(Collectors.partitioningBy(s -> s.length() > 3));
        log.info("{}",partitionResult);

See the operation results:

22:39:37.082 [main] INFO com.flydean.CollectorUsage - {false=[bob],true=[jack,alice,mark]}

The results are divided into two parts.

10. Create a custom collector

In the previous section on collectors, we mentioned that the collect method of a stream can call toList() or toMap() in Collectors to convert the result into a specific collection class.

Today, we will introduce how to customize a collector.

10.1 introduction to collector

Let's first look at the definition of Collector:

The Collector interface requires five methods to be implemented: supplier(), accumulator(), combiner(), finisher(), and characteristics().

At the same time, Collector provides two static of methods to make it easy to create a Collector instance.

The parameters of these two methods correspond one to one with the methods of the Collector interface.

These parameters are explained below:

supplier creates a new mutable result container; in other words, it supplies the initial collection.

accumulator folds an input element into the container.

combiner merges two containers into one; it is used when the stream runs in parallel.

finisher converts the container into the final result type.

characteristics describes properties of the collector; in the of factory methods this parameter is optional.

Collector declares three type parameters: T is the type of the input elements, A is the intermediate accumulation type (the type the supplier creates), and R is the final result type. The conversion flows T → A → R.
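The T → A → R flow can be seen concretely in a small runnable sketch (the joiner collector here is made up for illustration): the input elements are Strings (T), they accumulate into a StringBuilder (A), and the finisher converts that to a String (R):

```java
import java.util.stream.Collector;
import java.util.stream.Stream;

public class CollectorTypesDemo {
    public static void main(String[] args) {
        // T = String (input), A = StringBuilder (accumulation), R = String (result)
        Collector<String, StringBuilder, String> joiner = Collector.of(
                StringBuilder::new,        // supplier: create the mutable container (A)
                StringBuilder::append,     // accumulator: fold one T into A
                StringBuilder::append,     // combiner: merge two partial A containers
                StringBuilder::toString);  // finisher: A -> R
        String result = Stream.of("a", "b", "c").collect(joiner);
        System.out.println(result);
    }
}
```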

Now let's see how to use these parameters to construct a custom collector.

10.2 custom collector

We use Collector.of to build a collector that produces an immutable Set:

    public static <T> Collector<T,Set<T>,Set<T>> toImmutableSet() {
        return Collector.of(HashSet::new,Set::add,(left,right) -> {
                    left.addAll(right);
                    return left;
                },Collections::unmodifiableSet);
    }

In the above example, we use HashSet::new as the supplier and Set::add as the accumulator, and we define a custom combiner. Finally, Collections::unmodifiableSet converts the result into an immutable Set.

Above, HashSet::new is hard-coded as the way the initial set is created. In fact, the method can be made more general:

    public static <T,A extends Set<T>> Collector<T,A,Set<T>> toImmutableSet(
            Supplier<A> supplier) {

        return Collector.of(
                supplier,
                Set::add,
                (left,right) -> {
                    left.addAll(right);
                    return left;
                },
                Collections::unmodifiableSet);
    }

In this version, we lift the supplier out as a parameter, so the caller can choose the backing Set implementation.

Take a look at the tests of the above two methods:

    @Test
    public void toImmutableSetUsage(){
        Set<String> stringSet1=Stream.of("a","b","c","d")
                .collect(ImmutableSetCollector.toImmutableSet());
        log.info("{}",stringSet1);

        Set<String> stringSet2=Stream.of("a","d")
                .collect(ImmutableSetCollector.toImmutableSet(LinkedHashSet::new));
        log.info("{}",stringSet2);
    }

Output:

INFO com.flydean.ImmutableSetCollector - [a,b,c,d]
INFO com.flydean.ImmutableSetCollector - [a,d]

11. Detailed explanation of stream reduce and a common misunderstanding

The Stream API provides some predefined reduction operations, such as count(), max(), min(), and sum(). If we need to write the reduction logic ourselves, we can use the reduce method.

This section analyzes the use of the reduce method in detail and gives concrete examples.

11.1 detailed explanation of reduce

The Stream class has three overloads of reduce, taking one, two, and three parameters respectively. First, let's look at the one-parameter version:

Optional<T> reduce(BinaryOperator<T> accumulator);

This method accepts a BinaryOperator parameter. BinaryOperator is a @FunctionalInterface whose abstract method (inherited from BiFunction) is:

R apply(T t,U u);

The accumulator tells the reduce method how to accumulate the data in the stream.

for instance:

List<Integer> intList = Arrays.asList(1,2,3);
        Optional<Integer> result1=intList.stream().reduce(Integer::sum);
        log.info("{}",result1);

The output result of the above example:

com.flydean.ReduceUsage - Optional[6]

The one-parameter version is straightforward, so we won't dwell on it.

Next, let's look at the two-parameter version:

T reduce(T identity,BinaryOperator<T> accumulator);

This method takes two parameters: an identity and an accumulator. The extra parameter compared to the first version is identity.

Some articles may have told you that identity is just an initial value for reduce and can be anything you like, as below:

Integer result2=intList.stream().reduce(100,Integer::sum);
        log.info("{}",result2);

In the above example, the value we calculated is 106.

If we change the stream to parallel stream:

Integer result3=intList.parallelStream().reduce(100,Integer::sum);

The result is 306.

Why 306? Because in parallel execution each thread starts accumulating from 100: with three elements, the three partial results 101, 102 and 103 are combined into 306.

The results of parallel and sequential computation differ, and this is certainly not a bug in the JDK. The JDK documentation describes identity like this: the identity value must be an identity for the accumulator function, i.e. for every t, accumulator.apply(identity, t) must equal t.

So passing in 100 here is wrong, because sum(100, 1) != 1.

For the sum accumulator, the only valid identity is 0.

If we use 0 as the identity, the results calculated by stream and parallel stream are the same. This is the real intention of identity.
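The point can be checked with a minimal sketch: with the valid identity 0, the sequential and parallel reductions agree:

```java
import java.util.Arrays;
import java.util.List;

public class IdentityDemo {
    public static void main(String[] args) {
        List<Integer> intList = Arrays.asList(1, 2, 3);
        // 0 is a true identity for sum: sum(0, t) == t for every t,
        // so sequential and parallel reduction produce the same result.
        Integer seq = intList.stream().reduce(0, Integer::sum);
        Integer par = intList.parallelStream().reduce(0, Integer::sum);
        System.out.println(seq + " " + par);
    }
}
```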

Let's look at the methods of three parameters:

<U> U reduce(U identity,BiFunction<U,? super T,U> accumulator,BinaryOperator<U> combiner);

Different from the previous method, a combiner is added, which is used to combine the results of multithreaded computing.

You may wonder why the accumulator is a BiFunction while the combiner is a BinaryOperator:

public interface BinaryOperator<T> extends BiFunction<T,T,T>

BinaryOperator is a subinterface of BiFunction; the apply method to be implemented is defined in BiFunction.

In fact, the underlying implementation of reduce only calls apply and none of the other interface methods, so the distinction is presumably just to make the intent clearer.
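As a sketch of the three-parameter form (the word list is made up), here we reduce Strings to a total length, a case where the result type U differs from the element type T:

```java
import java.util.Arrays;
import java.util.List;

public class ThreeArgReduceDemo {
    public static void main(String[] args) {
        List<String> words = Arrays.asList("jack", "bob", "alice");
        // U = Integer, T = String: the accumulator folds a String into an Integer,
        // the combiner merges two partial Integer results from different threads.
        int totalLength = words.parallelStream().reduce(
                0,
                (partial, word) -> partial + word.length(),
                Integer::sum);
        System.out.println(totalLength);
    }
}
```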

Although reduce is a commonly used method, we must follow the identity contract: not every value is a valid identity.

12. Spliterator in stream

Spliterator is an interface introduced in Java 8. It is usually used together with streams to traverse and partition a sequence of elements.

A Spliterator is needed wherever a stream is used: List, Collection, I/O channels, and so on.

Let's first look at the definition of the stream method in the collection:

default Stream<E> stream() {
        return StreamSupport.stream(spliterator(),false);
    }
default Stream<E> parallelStream() {
        return StreamSupport.stream(spliterator(),true);
    }

We can see that both parallel and non-parallel streams are constructed through StreamSupport, which needs a Spliterator parameter.

Now that we know what a Spliterator is for, let's look at its structure:

Spliterator has four methods that must be implemented. We will explain each in detail below.

12.1 tryAdvance

tryAdvance processes one element of the stream: if an element remains, it applies the given action to it and returns true; otherwise it returns false.

If we don't want to process the remaining elements of the stream, we can return false from tryAdvance. Using this feature, we can break out of stream processing; this is discussed in a later section.

12.2 trySplit

trySplit attempts to split off part of the current Spliterator. It is mainly used for parallel streams: in a parallel stream, several threads process different elements, and trySplit is the method that partitions the elements among them.

Ideally, trysplit should split the stream into two equal parts to maximize performance.

12.3 estimateSize

estimateSize returns an estimate of the number of elements left to process. Its value generally changes after trySplit, as we will see in the example below.

12.4 characteristics

characteristics reports the properties of the Spliterator. There are eight defined characteristics:

public static final int ORDERED    = 0x00000010; // elements have a defined encounter order (each traversal yields the same order)
public static final int DISTINCT   = 0x00000001; // no two elements are equal
public static final int SORTED     = 0x00000004; // elements follow a defined sort order (a comparator may be specified)
public static final int SIZED      = 0x00000040; // the size is fixed; estimateSize() is exact
public static final int NONNULL    = 0x00000100; // no element is null
public static final int IMMUTABLE  = 0x00000400; // the source cannot be structurally modified
public static final int CONCURRENT = 0x00001000; // the source may be safely modified concurrently
public static final int SUBSIZED   = 0x00004000; // all child Spliterators are SIZED

A Spliterator can have several characteristics; they are combined with bitwise OR into the final characteristics value.

12.5 an example

We discussed some key methods of splitter above. Now let's take a specific example:

@AllArgsConstructor
@Data
public class CustBook {
    private String name;

}

First we define a CustBook class with a single name field.

Then we define a method that generates a list of CustBook objects:

    public static List<CustBook> generateElements() {
        return Stream.generate(() -> new CustBook("cust book"))
                .limit(1000)
                .collect(Collectors.toList());
    }

We define a call method that repeatedly invokes tryAdvance with a custom action; here the action rewrites the book's name and appends additional information.

    public String call(Spliterator<CustBook> spliterator) {
        int current = 0;
        while (spliterator.tryAdvance(a -> a.setName("test name"
                .concat("- add new name")))) {
            current++;
        }

        return Thread.currentThread().getName() + ":" + current;
    }

Finally, write the test method:

    @Test
    public void useTrySplit(){
        Spliterator<CustBook> split1 = SpliteratorUsage.generateElements().spliterator();
        Spliterator<CustBook> split2 = split1.trySplit();

        log.info("before tryAdvance: {}",split1.estimateSize());
        log.info("characteristics {}",split1.characteristics());
        log.info(call(split1));
        log.info(call(split2));
        log.info("after tryAdvance {}",split1.estimateSize());
    }

The results of the operation are as follows:

23:10:08.852 [main] INFO com.flydean.SpliteratorUsage - before tryAdvance: 500
23:10:08.857 [main] INFO com.flydean.SpliteratorUsage - characteristics 16464
23:10:08.858 [main] INFO com.flydean.SpliteratorUsage - main:500
23:10:08.858 [main] INFO com.flydean.SpliteratorUsage - main:500
23:10:08.858 [main] INFO com.flydean.SpliteratorUsage - after tryAdvance 0

The list has 1000 elements in total. After calling trySplit once, it is divided into two parts with 500 elements each.

Note that after the tryAdvance calls, estimateSize drops to 0, indicating that all elements have been processed.

Let's take another look at characteristics = 16464. In hexadecimal that is 0x4050 = ORDERED | SIZED | SUBSIZED.

These are exactly the characteristics of an ArrayList's Spliterator.
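The decoding can be verified in code; this small sketch tests the logged value against the Spliterator flag constants:

```java
import java.util.Spliterator;

public class CharacteristicsDemo {
    public static void main(String[] args) {
        int c = 16464; // the characteristics value logged above
        System.out.println(Integer.toHexString(c));          // hexadecimal form
        System.out.println((c & Spliterator.ORDERED) != 0);
        System.out.println((c & Spliterator.SIZED) != 0);
        System.out.println((c & Spliterator.SUBSIZED) != 0);
        System.out.println((c & Spliterator.DISTINCT) != 0);
    }
}
```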

13. How to break out of a stream's forEach

We usually need to traverse and process the data in a Java stream, and forEach is the most commonly used method for that.

But sometimes we don't want to process all the data, or sometimes the stream may be very long or infinite at all.

One approach is to first filter out the data we need to process, and then traverse the result with forEach.

So how do we break the stream directly? Today, this article focuses on this problem.

13.1 using Spliterator

In the previous section on Spliterator, we mentioned that if tryAdvance returns false, the Spliterator stops processing subsequent elements.

Using this idea, we can create a custom Spliterator.

If we have such a stream:

Stream<Integer> ints = Stream.of(1,2,3,4,5,6,7,8,9,10);

We want to define an operation that stops as soon as an element is no longer less than 5.

We define a general-purpose Spliterator:

public class CustomSpliterator<T> extends Spliterators.AbstractSpliterator<T>  {

    private Spliterator<T> splitr;
    private Predicate<T> predicate;
    private volatile boolean isMatched = true;

    public CustomSpliterator(Spliterator<T> splitr,Predicate<T> predicate) {
        super(splitr.estimateSize(),0);
        this.splitr = splitr;
        this.predicate = predicate;
    }

    @Override
    public synchronized boolean tryAdvance(Consumer<? super T> consumer) {
        boolean hadNext = splitr.tryAdvance(elem -> {
            if (predicate.test(elem) && isMatched) {
                consumer.accept(elem);
            } else {
                isMatched = false;
            }
        });
        return hadNext && isMatched;
    }
}

In the class above, predicate is the condition passed in by the caller. We override tryAdvance: predicate.test(elem) guards the consumer, and once an element fails the test, isMatched becomes false and tryAdvance returns false, stopping the traversal.

See how to use:

@Slf4j
public class CustomSpliteratorUsage {

    public static <T> Stream<T> takeWhile(Stream<T> stream,Predicate<T> predicate) {
        CustomSpliterator<T> customSpliterator = new CustomSpliterator<>(stream.spliterator(),predicate);
        return StreamSupport.stream(customSpliterator,false);
    }

    public static void main(String[] args) {
        Stream<Integer> ints = Stream.of(1,2,3,4,5,6,7,8,9,10);
        List<Integer> result =
          takeWhile(ints,x -> x < 5 )
                        .collect(Collectors.toList());
        log.info(result.toString());
    }
}

We define a takeWhile method that receives a stream and a predicate; elements are consumed only while the predicate holds. Let's see the output:

[main] INFO com.flydean.CustomSpliteratorUsage - [1,2,3,4]

13.2 custom forEach method

In addition to using a Spliterator, we can also write a custom forEach method with our own traversal logic:

public class CustomForEach {

    public static class Breaker {
        private volatile boolean shouldBreak = false;

        public void stop() {
            shouldBreak = true;
        }

        boolean get() {
            return shouldBreak;
        }
    }

    public static <T> void forEach(Stream<T> stream,BiConsumer<T,Breaker> consumer) {
        Spliterator<T> spliterator = stream.spliterator();
        boolean hadNext = true;
        Breaker breaker = new Breaker();

        while (hadNext && !breaker.get()) {
            hadNext = spliterator.tryAdvance(elem -> {
                consumer.accept(elem,breaker);
            });
        }
    }
}

In the above example, we introduce an external Breaker flag in forEach to decide whether to keep calling Spliterator.tryAdvance.

See how to use:

@Slf4j
public class CustomForEachUsage {

    public static void main(String[] args) {
        Stream<Integer> ints = Stream.of(1,2,3,4,5,6,7,8,9,10);
        List<Integer> result = new ArrayList<>();
        CustomForEach.forEach(ints,(elem,breaker) -> {
            if (elem >= 5 ) {
                breaker.stop();
            } else {
                result.add(elem);
            }
        });
        log.info(result.toString());
    }
}

Here we use the new forEach method and flip the flag once the condition is met, thereby breaking out of the stream.

14. Use of Predicate chains

Predicate is a functional interface whose method takes one argument and returns a boolean. It is typically used in a stream's filter to decide whether an element passes the filtering condition.

    boolean test(T t);

14.1 basic use

Let's first look at how to use a Predicate in a stream's filter:

    @Test
    public void basicUsage(){
        List<String> stringList=Stream.of("a","d").filter(s -> s.startsWith("a")).collect(Collectors.toList());
        log.info("{}",stringList);
    }

The above example is very basic, so I won't talk about it here.

14.2 using multiple filters

If we have multiple predicate conditions, we can use multiple filters to filter:

    public void multipleFilters(){
        List<String> stringList=Stream.of("a","ab","aac","ad").filter(s -> s.startsWith("a"))
                .filter(s -> s.length()>1)
                .collect(Collectors.toList());
        log.info("{}",stringList);
    }

In the above example, we chained two filter calls, each with its own predicate.

14.3 using a composite Predicate

Since a Predicate just maps one argument to a boolean, multiple conditions can be combined into a single test method:

    @Test
    public void complexPredicate(){
        List<String> stringList=Stream.of("a","ad")
                .filter(s -> s.startsWith("a") &&  s.length()>1)
                .collect(Collectors.toList());
        log.info("{}",stringList);
    }

In the above example, we use s.startsWith("a") && s.length() > 1 as the implementation of test.

14.4 combining Predicates

Although Predicate is an interface, it provides several default methods for combining predicates.

For example: Predicate.and(), Predicate.or(), and Predicate.negate().

Let's look at their examples:

@Test
    public void combiningPredicate(){
        Predicate<String> predicate1 = s -> s.startsWith("a");
        Predicate<String> predicate2 =  s -> s.length() > 1;
        List<String> stringList1 = Stream.of("a","ad")
                .filter(predicate1.and(predicate2))
                .collect(Collectors.toList());
        log.info("{}",stringList1);

        List<String> stringList2 = Stream.of("a","ad")
                .filter(predicate1.or(predicate2))
                .collect(Collectors.toList());
        log.info("{}",stringList2);

        List<String> stringList3 = Stream.of("a","ad")
                .filter(predicate1.or(predicate2.negate()))
                .collect(Collectors.toList());
        log.info("{}",stringList3);

    }

In fact, we don't need to explicitly declare a Predicate variable: any lambda expression matching the Predicate interface can be treated as one, and and, or and negate can be called on it directly (with a cast):

List<String> stringList4 = Stream.of("a","ad")
                .filter(((Predicate<String>)a -> a.startsWith("a"))
                        .and(a -> a.length() > 1))
                .collect(Collectors.toList());
        log.info("{}",stringList4);

14.5 combining a collection of Predicates

If we have a collection of predicates, we can merge them with the reduce method:

@Test
    public void combiningPredicateCollection(){
        List<Predicate<String>> allPredicates = new ArrayList<>();
        allPredicates.add(a -> a.startsWith("a"));
        allPredicates.add(a -> a.length() > 1);

        List<String> stringList = Stream.of("a","ad")
                .filter(allPredicates.stream().reduce(x->true,Predicate::and))
                .collect(Collectors.toList());
        log.info("{}",stringList);
    }

In the above example, we use reduce to AND together all the predicates in the collection, starting from the always-true identity x -> true.

15. Create an infinite stream

In Java, we can convert a concrete collection into a stream. But in some situations, such as a test environment, we need to construct a stream with a given number of elements. What should we do?

Here we can build an infinite stream and then call the limit method to limit the number of returns.

15.1 basic use

Let's look at an example that creates an infinite stream with Stream.iterate:

    @Test
    public void infiniteStream(){
        Stream<Integer> infiniteStream = Stream.iterate(0,i -> i + 1);
        List<Integer> collect = infiniteStream
                .limit(10)
                .collect(Collectors.toList());
        log.info("{}",collect);
    }

In the above example, Stream.iterate creates the infinite stream 0, 1, 2, 3, 4, ...

Then call limit (10) to get the first 10 of them. Finally, we call the collect method to transform it into a collection.

Look at the output:

INFO com.flydean.InfiniteStreamUsage - [0,1,2,3,4,5,6,7,8,9]

15.2 custom types

What should we do if we want to output a collection of custom types?

First, we define a custom type:

@Data
@AllArgsConstructor
public class IntegerWrapper {
    private Integer integer;
}

Then use Stream.generate to produce instances of this custom type:

    public static IntegerWrapper generateCustType(){
        return new IntegerWrapper(new Random().nextInt(100));
    }

    @Test
    public void infiniteCustType(){
        Supplier<IntegerWrapper> randomCustTypeSupplier = InfiniteStreamUsage::generateCustType;
        Stream<IntegerWrapper> infiniteStreamOfCustType = Stream.generate(randomCustTypeSupplier);

        List<IntegerWrapper> collect = infiniteStreamOfCustType
                .skip(10)
                .limit(10)
                .collect(Collectors.toList());
        log.info("{}",collect);
    }

Look at the output:

INFO com.flydean.InfiniteStreamUsage - [IntegerWrapper(integer=46),IntegerWrapper(integer=42),IntegerWrapper(integer=67),IntegerWrapper(integer=11),IntegerWrapper(integer=14),IntegerWrapper(integer=80),IntegerWrapper(integer=15),IntegerWrapper(integer=19),IntegerWrapper(integer=72),IntegerWrapper(integer=41)]

16. Customizing the thread pool of parallelStream

We mentioned earlier that parallel streams submit their tasks to a ForkJoinPool under the hood. By default, the common ForkJoinPool has roughly one thread per processor core, and unless told otherwise, parallel streams use this shared pool.

So what should we do if we want a particular task to use a custom ForkJoinPool?

16.1 general operation

If we want to sum the integers from 1 to 999, we can do it with a parallel stream:

List<Integer> integerList = IntStream.range(1,1000).boxed().collect(Collectors.toList());

        Integer total = integerList.parallelStream().reduce(0,Integer::sum);
        log.info("{}",total);

Output results:

INFO com.flydean.CustThreadPool - 499500

16.2 using a custom ForkJoinPool

The above example uses the shared thread pool. Let's see how to submit parallel streams using a custom thread pool:

List<Integer> integerList = IntStream.range(1,1000).boxed().collect(Collectors.toList());

ForkJoinPool customThreadPool = new ForkJoinPool(4);
        Integer actualTotal = customThreadPool.submit(
                () -> integerList.parallelStream().reduce(0,Integer::sum)).get();
        log.info("{}",actualTotal);

In the above example, we defined a ForkJoinPool with four threads and used it to submit the parallelStream task.

Output results:

INFO com.flydean.CustThreadPool - 499500

If you do not want to use the common thread pool, submit the parallel stream from a custom ForkJoinPool instead.

17. Summary

This article has covered the use of streams and lambda expressions, down to many of their small details. I hope you find it useful.

Code for this article https://github.com/ddean2009/learn-java-streams/
