Java Core Tutorial

In this section, we will cover the basics of Java with clear and simple explanations. We have organised the concepts in the form of questions. I am sure these concepts will be beneficial both to new developers and to those who are preparing for interviews. We will call this section 'Java Core Tutorials'; these are must-know concepts for every Java professional. In addition, we have a separate section for Java Interview Questions. Let's start learning the basics of Core Java in our topic 'Java Core Tutorial'.

Q: What is the history of Java? How popular is it?

In the early nineties, Java was created by a team led by James Gosling at Sun Microsystems (now Oracle). It was originally designed for use on digital mobile devices, such as cell phones. However, when Java 1.0 was released to the public in 1996, its main focus shifted to use on the Internet, giving developers a way to produce animated, interactive web pages. Over the years it has evolved into a successful language for use both on and off the Internet, and it remains extremely popular, with over 6.5 million developers worldwide. For more than 25 years Java has been accepted by the majority of clients as a preferred language for developing applications. Java's features, especially those introduced in Java 8, have made it even more popular among its clients and users.

Q: What are the features of Java ?

Ans: There are 12 important buzzwords (features) in Java.

  1. Simple : Easy to learn; difficult features such as pointers and multiple inheritance of classes have been removed from Java.
  2. Platform independent : Write once, run anywhere.
  3. Architecture Neutral : If the underlying system architecture changes, the same Java program still runs without any issue.
  4. Portable : We can migrate Java code from Windows to Linux easily.
  5. Secure : Lets you create applications that can't be invaded from outside.
  6. Object Oriented : Java is built around classes and objects; from JDK 1.8 onwards it has also incorporated functional programming features.
  7. Multi-threaded : You can build applications with many concurrent threads of activity, resulting in highly responsive applications.
  8. Robust : Encourages reliable programming habits for creating highly reliable applications.
  9. Distributed : It facilitates building distributed applications in Java. To create distributed applications we use RMI, EJB, CORBA, etc.
  10. Interpreted : Java combines the power of compiled languages with the flexibility of interpreted languages.
  11. High Performance : Java code is compiled to bytecode, which is further optimized at runtime by the JIT compiler so that the Java Virtual Machine (JVM) can execute Java applications at full speed.
  12. Dynamic : It supports dynamic loading of classes, which means class loading happens on demand.

Q. How is Java a platform independent language?

Ans : In general, a compiler's job in a programming language is to convert source code into machine-understandable code (also called executable code). In Java there is one more form of code between the source code and the machine-understandable code, called bytecode. In fact, converting Java source code to machine-understandable code is a two-step process: first from source code to bytecode (.class file), which is done by the compiler, and then from bytecode to machine-understandable code, which is done by the JVM. The JVM is software that comes automatically with the JDK installation.

The JVM is platform dependent. Therefore, while converting bytecode to executable code, the JVM makes it compatible with the platform it belongs to. In this complete process there are two translators that make this possible, i.e. the compiler and the JVM. The compiler reads the source file (.java file), and the JVM reads the bytecode (.class file). Because of this platform independence, Java's slogan is Write Once, Run Anywhere (WORA). The points below are also important to keep in mind.

Java Source Code - platform independent
Java Bytecode - platform independent
Java Compiler - platform independent
Java Executable Code - platform dependent
Java Virtual Machine - platform dependent
Java Software(jdk) - platform dependent
Java Program - platform independent
Java Software Application - platform independent

Languages where the conversion of source code to machine language is a single step, such as C and C++, fall under the platform-dependent category.
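
To illustrate the two-step process, consider a minimal program (the file name Hello.java is only an example). Compiling it with 'javac Hello.java' produces the platform-independent bytecode file Hello.class; running 'java Hello' lets the platform-dependent JVM translate that bytecode into executable code for the current machine.

// Hello.java - compiled once by javac, runnable on any platform that has a JVM
public class Hello {
    public static void main(String[] args) {
        System.out.println("Hello, Java!");
    }
}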

Q. What are JRE, JDK, and JVM in Java, and what are the differences between them?

Ans: First of all, let's understand the purpose of these terms in Java. The JRE (Java Runtime Environment) helps us run already-compiled Java code, whereas the JDK (Java Development Kit) helps us compile & run the code. Hence, if you only want to run already compiled code, such as code packaged in a jar/war/ear, the JRE is sufficient. On the other hand, if you want to compile & then run your code, you must have the JDK in your system. The JVM (Java Virtual Machine) converts compiled code (bytecode) into machine language code. The JRE contains the JVM, and the JDK contains the JRE.

JRE = Java API + JVM
JDK = Compiler + JRE
JVM = Interpreter + JIT

The JVM runs a Java program with the help of the interpreter.
The JIT (Just-In-Time) compiler helps the interpreter run faster by compiling frequently executed bytecode to native code.
The Java API carries predefined classes such as String, System, Thread, Exception, FileInputStream, FileOutputStream, ArrayList, Collections, etc.

Q. What is the JIT compiler?

Ans: The JIT compiler runs while the program is executing: it compiles frequently used bytecode into native machine code so that it no longer has to be interpreted. Furthermore, a standard (ahead-of-time) compiler can't access dynamic runtime information, whereas the JIT compiler can, and can therefore make better optimizations.

Q. What makes Java performant?

Ans: The Just-In-Time (JIT) compiler makes Java performant. It translates frequently executed bytecode into machine code and then executes it directly instead of interpreting it. The JIT compiler is enabled by default in Java and gets activated once a method has been called often enough to become "hot". It then compiles the bytecode of that Java method into native machine code, and afterwards the JVM calls the compiled code directly instead of interpreting it. In this way, it makes Java performant.
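
As a small illustration, you can watch the JIT at work by running a program with the standard HotSpot flag -XX:+PrintCompilation, which logs methods as they are compiled to native code. The class below is only a sketch; the method becomes "hot" because the loop calls it many times.

public class JitDemo {

    // Called repeatedly, so the JIT eventually compiles it to native code
    static long square(long n) {
        return n * n;
    }

    public static void main(String[] args) {
        long sum = 0;
        for (int i = 0; i < 1_000_000; i++) {
            sum += square(i);
        }
        System.out.println(sum);
    }
}

Compile it with 'javac JitDemo.java' and run it with 'java -XX:+PrintCompilation JitDemo'; among the logged entries, look for one mentioning JitDemo::square once the loop has executed it enough times.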

Q. Where is Java being used?

Ans: Java is one of the most widely used languages. The following are some application areas where Java is commonly used:

1) Desktop applications
2) Web applications
3) Mobile applications (Android)
4) Cloud computing
5) Enterprise applications
6) Operating Systems
7) Web servers and application servers
8) Embedded systems
9) Computer games
10) Scientific applications
11) Smart cards
12) Cryptography

Q. What are the variables in Java and their types?

Ans: Variables are named placeholders in Java. In other words, Java stores the program's data in variables.

They are of 4 types in Java:

1. Instance Variables (Non-Static Fields)

2. Class Variables (Static Fields)

3. Local Variables

4. Parameters

♥ Some people don't consider a parameter to be a variable, but it does come under the types of variables as per the Oracle Java documentation.
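
The small sketch below (class and variable names are only illustrative) shows all four types of variables in one place.

public class VariableTypes {

    int instanceVar = 10;          // 1. Instance variable (non-static field)
    static int classVar = 20;      // 2. Class variable (static field)

    void show(int parameter) {     // 4. Parameter
        int localVar = 30;         // 3. Local variable
        System.out.println(instanceVar + classVar + localVar + parameter);
    }

    public static void main(String[] args) {
        new VariableTypes().show(40);   // prints 100
    }
}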

Q. What are Data Types in Java?

Ans: In Java, every variable is declared with the help of a data type. The data type of a variable determines the type of data the variable can contain and what operations we can execute on it. There are two categories of data types: Primitive Data Types and Non-primitive Data Types.

Primitive Data Types

There are 8 types of primitive data types in Java:
int
float
char
boolean
byte
short
long
double

Non-Primitive Data Types

These are special, user-defined data types. They are also called reference data types. For example, classes, interfaces, String, and arrays are non-primitive data types.

Note : 'var', introduced in Java 10, is a case-sensitive reserved type name that lets the Java compiler infer the type of a variable from its initial value. However, it is not a keyword. We can only use it for local variables that are initialised at declaration, and we can use 'var' as an identifier anywhere in Java except as a class name. You may visit the link for examples of var.
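
A minimal sketch of var (requires Java 10 or later; the variable names are only illustrative):

public class VarDemo {
    public static void main(String[] args) {
        var message = "javatechonline.com";   // compiler infers String
        var count = 10;                       // compiler infers int
        // var works only for local variables and needs an initializer at declaration
        System.out.println(message + " : " + count);
    }
}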

Q. What is the typecasting in Java?

Converting a value of one data type into another data type is typecasting. It is not possible for the boolean data type. It is of two types: Implicit and Explicit.

Implicit Typecasting: Converting from a smaller data type to a larger data type. The compiler does this automatically.

Explicit Typecasting: Converting a larger data type into a smaller data type. This can result in information loss, as illustrated below.
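
A minimal sketch of both kinds of typecasting (the values are only illustrative):

public class TypeCasting {
    public static void main(String[] args) {
        int smaller = 100;
        long larger = smaller;        // Implicit (widening) cast, done automatically by the compiler

        double d = 99.99;
        int i = (int) d;              // Explicit (narrowing) cast, the fractional part is lost
        System.out.println(larger + " " + i);   // prints 100 99
    }
}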

Q. What are operators in Java?

Ans: Like any other language, Java also has some set of operators to perform specific programming operations.

There are the following types of Java operators;

1) Arithmetic Operators
2) Logical Operators
3) Unary Operators
4) Ternary Operators
5) Assignment Operators
6) Relational Operators
7) Bitwise Operators
8) Shift Operators
9) instanceof operator
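
A few of these operators in action (a small illustrative sketch):

public class OperatorDemo {
    public static void main(String[] args) {
        int a = 10, b = 3;
        System.out.println(a + b);                   // Arithmetic: 13
        System.out.println(a > b && b > 0);          // Relational + Logical: true
        int max = (a > b) ? a : b;                   // Ternary: 10
        a += 5;                                      // Assignment: a becomes 15
        Object text = "hello";
        System.out.println(text instanceof String);  // instanceof: true
        System.out.println(max + " " + a);           // prints 10 15
    }
}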

Q. What is comment in Java?

Ans: Comments improve code readability and understandability. Whenever we need to add documentation about the functionality of the code, we should add a comment. However, the Java compiler never processes the comments; it simply ignores them during compilation.

The comments are of two types:

Single-Line Comments 

As the name suggests, this comment occupies a single line. Generally, we write it near a line of code to explain its meaning. We begin single-line comments with two forward slashes (//). For example, observe the code below:

public class SingleLineComment {

   public static void main(String[] args) {

      //Initialising str with value: "javatechonline.com"
      String str = "javatechonline.com";
   }
}

Multi-Line Comments 

It spans multiple lines. Generally, we write multi-line comments at the beginning of the program as a short explanation of it. Developers also use them to comment out blocks of code during debugging. We mark them with a starting tag (/*) and an ending tag (*/).

public class MultiLineComment {

 public static void main(String[] args) {

 /* String str = "javatechonline"
 String str1 = str.concat(".com");
 System.out.println(str1);
 */

 }
}

Q. What is method in Java?

Ans: Methods are among the most important parts of a program in Java. This is the place where a programmer writes the logic that implements the functionality of the program. In Java, there are generally two types of methods:

  • User-defined Methods: We can create our own method based on our requirements to implement a functionality.
  • Standard Library Methods: These are built-in methods in the Java library that we can use. In fact, sometimes we use these methods while writing a user-defined method.

Q. How to write a Java Method?

Ans: The simplest syntax to declare a method is:

returnType methodName() {
  // method body
}

Here,

Return Type: The return type specifies what type of value a method returns. For example, if a method has an int return type, then it returns an integer value. If the method does not return a value, its return type will be void.
Method Name: It is an identifier that we use to refer to the particular method in a program.
Method Body:  It includes the logic in the form of programming statements to perform some tasks. The method body is enclosed inside the curly braces { }.

Furthermore, the complete syntax of declaring a method is as below:

specifier static returnType nameOfMethod (parameter1, parameter2, ...) {
  // method body
}

In the above syntax,

specifier – It defines the access type, i.e. whether the method is public, private, protected, or default.

static – This is also a modifier. There are several modifiers that may be part of a method declaration, such as static, final, abstract, synchronized, strictfp, and native.

parameter1/parameter2 – These are the values passed to a method. We can pass any number of arguments to a method based on our requirement.
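
Putting the pieces of this syntax together, below is a small sketch of a user-defined method (the class and method names are only illustrative):

public class Calculator {

    // specifier: public, modifier: static, return type: int, two parameters
    public static int add(int first, int second) {
        return first + second;        // method body
    }

    public static void main(String[] args) {
        int result = add(5, 7);
        System.out.println(result);   // prints 12
    }
}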

Q. What is String Pool in Java?

Ans: The String pool is a set of strings stored in heap memory. Whenever we create a String from a literal, the JVM first checks whether it is already present in the String pool. If it is, the same reference is returned to the variable; otherwise a new object is created in the String pool and its reference is returned. It is advisable to create strings via the String pool whenever possible in order to save memory.
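
The small sketch below contrasts a pooled string literal with a string created via new:

public class StringPoolDemo {
    public static void main(String[] args) {
        String a = "java";                    // placed in the String pool
        String b = "java";                    // the same pooled object is reused
        String c = new String("java");        // a separate object on the heap, outside the pool

        System.out.println(a == b);           // true  (same pooled reference)
        System.out.println(a == c);           // false (different objects)
        System.out.println(a == c.intern());  // true  (intern() returns the pooled copy)
    }
}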

Q. What are the concepts of boxing, unboxing, autoboxing, and auto-unboxing?

Answer: These are the concepts of converting between a primitive value and its wrapper object.

Boxing: Creating a wrapper object from a primitive value.
Unboxing: Getting the primitive value back from the wrapper object.
Autoboxing: Assigning a primitive value directly to a wrapper object; the compiler inserts the boxing automatically.
Auto-unboxing: Assigning a wrapper object directly to a primitive variable; the compiler inserts the unboxing automatically.

public class BoxingUnboxing {
   public static void main(String args[]){
      int k = 4;
      Integer i = new Integer(k);   // Boxing (Integer.valueOf(k) is preferred since Java 9)
      Integer j = k;                // Autoboxing
      int x = j.intValue();         // Unboxing
      int y = j;                    // Auto-unboxing
   }
}

Explore other topics from the Links below:

Java Features Before Java8

Java 8 features

Functional Interfaces 

The Lambda (λ) Expression

Default Methods inside Interface

Static Methods inside Interface

Method Reference(::) 

Java Features After Java9

Java 14 Features

JVM ARCHITECTURE, Class Loaders & Java Code Processing

Spring Cloud Annotations With Examples

The introduction of new annotations reduces development effort day by day. Needless to say, as developers we can't imagine building an enterprise-level application without using annotations, especially in applications that use Spring or related frameworks. Furthermore, we come across the Spring Cloud framework when we develop a microservices-based application. Nowadays, there is a high demand for microservices-based applications in the industry. Therefore, it becomes crucial to know the annotations used in Spring Cloud. Hence, in this article, we are going to discuss 'Spring Cloud Annotations With Examples'.

We can't deny the fact that the cloud is the future and, in the upcoming days, we will see a lot of Java-based applications deployed in the cloud. So, it's better to learn and master Spring Cloud; in the future it might become the standard framework for developing cloud-based Java applications. Let's start discussing our topic 'Spring Cloud Annotations With Examples' and the related concepts.

@EnableEurekaServer

When you develop a microservices-based project using Spring Boot & Spring Cloud, this will be the first annotation you apply to the main class of a microservice to make it a Eureka Server. As the name suggests, it enables the Eureka Server. We use this annotation in the context of Service Registry & Discovery. In Service Registry, every microservice registers itself with the Eureka server. Service Discovery, on the other hand, is the concept where one microservice discovers another microservice with the help of its entry in the Eureka server. In order to learn more about Eureka, kindly visit Netflix Eureka Service Registry & Discovery.

Moreover, once we apply Spring Cloud's @EnableEurekaServer annotation, other microservices can register with this server and communicate with each other via service discovery. For example, in order to make your application/microservice act as a Eureka server, apply @EnableEurekaServer at the main class of your application as shown below.

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.netflix.eureka.server.EnableEurekaServer;

@EnableEurekaServer
@SpringBootApplication
public class SpringCloudEurekaServerApplication {

    public static void main(String[] args) {
       SpringApplication.run(SpringCloudEurekaServerApplication.class, args);
    }
}

@EnableEurekaClient

This annotation relates to the concept of Service Discovery in Microservices. Using Service Discovery, one microservice can communicate with another microservice via the Eureka Server. Hence, any microservice that wants to register with the Eureka Server and be discovered through it is a candidate for this annotation.

In order to make your application/microservice act as a Eureka discovery client, you need to apply @EnableEurekaClient at the main class of your application as shown below.

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.netflix.eureka.EnableEurekaClient;

@EnableEurekaClient
@SpringBootApplication
public class SpringCloudPaymentServiceApplication {

   public static void main(String[] args) {
      SpringApplication.run(SpringCloudPaymentServiceApplication.class, args);
   }
}

@EnableFeignClients

In Microservices, communication among multiple services is based on the producer-consumer concept: the consumer service consumes the services published by the producer service. We apply this annotation on the consumer side. Feign is a declarative REST client; it makes writing web service clients easier.

In order to get the features of OpenFeign, we will additionally need to apply @EnableFeignClients at the main class as below.

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.netflix.eureka.EnableEurekaClient;
import org.springframework.cloud.openfeign.EnableFeignClients;

@SpringBootApplication
@EnableEurekaClient
@EnableFeignClients
public class SpringCloudFeignStudentServiceApplication {

     public static void main(String[] args) {
        SpringApplication.run(SpringCloudFeignStudentServiceApplication.class, args);
     }
}

@FeignClient(name="ApplicationName")

This annotation works together with the @EnableFeignClients annotation. At the consumer's interface level, we apply the @FeignClient annotation and provide the name of the producer service/application. For example, the code snippet below demonstrates the concept.

import java.util.List;

import org.springframework.cloud.openfeign.FeignClient;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import com.dev.springcloud.feign.model.Book;

@FeignClient(name="BOOK-SERVICE")
public interface BookRestConsumer {

      @GetMapping("/book/data")
      public String getBookData();

      @GetMapping("/book/{id}")
      public Book getBookById(@PathVariable Integer id);

      @GetMapping("/book/all")
      public List<Book> getAllBooks();

      @GetMapping("/book/entity")
      public ResponseEntity<String> getEntityData();
}

Furthermore, in order to receive the data from Book service, we need to auto-wire the BookRestConsumer where it is required. Additionally, for more details on Feign Client related concepts, kindly visit a separate article on ‘How To Implement Feign Client In Spring Boot Microservices?‘.

@EnableConfigServer

The Config Server is a central configuration server that provides configurations (properties) to each microservice connected to it. In order to make a microservice act as the Config Server, we apply the @EnableConfigServer annotation at the main class of that microservice. For example, the code snippet below demonstrates the use of the @EnableConfigServer annotation.

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.config.server.EnableConfigServer;

@SpringBootApplication
@EnableConfigServer
public class SpringCloudConfigServerApplication {

    public static void main(String[] args) {
       SpringApplication.run(SpringCloudConfigServerApplication.class, args);
    }
}

Furthermore, Spring Cloud Config Clients are the microservices that utilize the services provided by Spring Cloud Config Server. You may visit a separate detailed article on ‘How To Implement Spring Cloud Config Server In Microservices?‘ and the related concepts.

Annotations On Fault Tolerance provided by Resilience4j

In the context of Microservices, Fault Tolerance is the technique of tolerating a fault. A microservice that tolerates faults is known as fault tolerant. Moreover, a microservice should be fault tolerant in such a way that the entire application keeps running smoothly. Various kinds of faults can occur while running a microservices-based application. In order to implement this technique, Resilience4j offers us a variety of modules based on the type of fault we want to tolerate. Here, we will discuss the annotations used for the different types of faults. For complete details on the Resilience4j API, kindly visit our separate article on 'How To Implement Fault Tolerance In Microservices Using Resilience4j?'.

@RateLimiter

The Rate Limiter limits the number of requests in a given period. There are various reasons to limit the number of requests that an API can handle, such as protecting the resources from spammers, minimizing the overhead, meeting a service level agreement, and many others. We can achieve this functionality with the help of the @RateLimiter annotation provided by Resilience4j, without writing the code explicitly. For example, the code snippet below demonstrates the functionality of @RateLimiter applied to a method.

@GetMapping("/getMessage")
@RateLimiter(name = "getMessageRateLimit", fallbackMethod = "getMessageFallBack")
public ResponseEntity<String> getMessage(@RequestParam(value="name", defaultValue = "Hello") String name){

    return ResponseEntity.ok().body("Message from getMessage() :" +name);
}

@Retry 

Suppose Microservice 'A' depends on another Microservice 'B'. Let's assume Microservice 'B' is a faulty service and its success rate is only up to 50-60%. The fault may be due to any reason, such as the service being unavailable, a buggy service that sometimes responds and sometimes doesn't, or an intermittent network failure. In this case, if Microservice 'A' retries the request 2 to 3 times, the chances of getting a response increase. We can achieve this functionality with the help of the @Retry annotation provided by Resilience4j, without writing the code explicitly. For example, the code snippet below demonstrates the functionality of @Retry applied to a method.

@GetMapping("/getInvoice")
@Retry(name = "getInvoiceRetry", fallbackMethod = "getInvoiceFallback") 
public String getInvoice() {
   logger.info("getInvoice() call starts here");
   ResponseEntity<String> entity= restTemplate.getForEntity("http://localhost:8080/invoice/rest/find/2", String.class);
   logger.info("Response :" + entity.getStatusCode());
   return entity.getBody();
}

@CircuitBreaker

As the name suggests, it means 'breaking the circuit'. Suppose a Microservice 'A' internally calls another Microservice 'B', and 'B' has some fault. Needless to say, in a Microservice Architecture 'A' might be dependent on other microservices, and the same is true for Microservice 'B'. In order to prevent multiple microservices from becoming erroneous as a result of the cascading effect, we stop calling the faulty Microservice 'B'. Instead, we call a dummy method known as a 'fallback method'. Therefore, calling a fallback method instead of the actual service due to a fault is called breaking the circuit.

Using a Circuit Breaker we can stop the flow of failures downstream/upstream. We can achieve this functionality easily with the help of the @CircuitBreaker annotation, without writing specific code. For example, the code below demonstrates the concept of @CircuitBreaker applied to a method.

@GetMapping("/getInvoice")
@CircuitBreaker(name = "getInvoiceCB", fallbackMethod = "getInvoiceFallback") 
public String getInvoice() { 
   logger.info("getInvoice() call starts here");
   ResponseEntity<String> entity= restTemplate.getForEntity("http://localhost:8080/invoice/rest/find/2", String.class);
   logger.info("Response :" + entity.getStatusCode());
   return entity.getBody();
}

@Bulkhead

In the context of the Fault Tolerance mechanism, if we want to limit the number of concurrent requests, we can use the Bulkhead pattern. Using Bulkhead, we can limit the number of concurrent calls to a particular service. We can achieve this functionality easily with the help of the @Bulkhead annotation, without writing specific code. For example, the code snippet below demonstrates the usage of the @Bulkhead annotation applied to a method.

@GetMapping("/getMessage")
@Bulkhead(name = "getMessageBH", fallbackMethod = "getMessageFallBack")
public ResponseEntity<String> getMessage(@RequestParam(value="name", defaultValue = "Hello") String name){

    return ResponseEntity.ok().body("Message from getMessage() :" +name);
}

@TimeLimiter

Time limiting is the process of setting a time limit for a microservice to respond. Suppose Microservice 'A' sends a request to Microservice 'B' and sets a time limit for Microservice 'B' to respond. If Microservice 'B' doesn't respond within that time limit, it is considered to have a fault. We can achieve this functionality easily with the help of the @TimeLimiter annotation, without writing specific code. For example, the code below demonstrates the usage of @TimeLimiter.

@GetMapping("/getMessageTL")
@TimeLimiter(name = "getMessageTL")
public CompletableFuture<String> getMessage() {
   return CompletableFuture.supplyAsync(this::getResponse);
}

For complete examples of all Fault Tolerance annotations, kindly visit the separate article on ‘How To Implement Fault Tolerance In Microservices Using Resilience4j?’.


How To Add JDK 18 Support in Eclipse


Oracle released JDK 18 in March 2022, with a general availability date of 2022-03-22. JDK 18 is a short-term feature release that is supported for six months, whereas JDK 17 is a long-term-support (LTS) release, with extended support from Oracle expected for around eight years. Every Java professional should be curious to know what is new in Java 18. But how do we test the new features of Java 18 practically in these early days? We have good news for Eclipse users. In this article, we will learn how to add JDK 18 support in Eclipse 2022-03 and in older versions of Eclipse as well. Thanks to the Eclipse community, which provides a solution for developers before every new JDK release.

In this article, we will learn more than one way of configuring JDK 18 in Eclipse, each with step-by-step instructions to do it easily. Let's start our topic 'How to add JDK 18 support in Eclipse?'.

Pre-requisite

Before learning 'How to add JDK 18 support in Eclipse?', we need to make sure that the following software is installed on our system.

♦ JDK 18 (you can download JDK 18 from the link)
♦ Eclipse 2022-03: Eclipse IDE for Enterprise Java and Web Developers (you can download Eclipse 2022-03 from the link as per your operating system). If you have a lower version of Eclipse, you can still add the features of JDK 18 via a plugin.

Where can we download Java JDK latest version from?

There is an official website to download the latest version of the JDK. We can even get early access to future JDK versions from the same link. Since we are discussing JDK 18 in this article, we need to download JDK 18 first. The direct link to download JDK 18 is https://jdk.java.net/18/.

How to install Java/JDK 18?

In order to install JDK 18, download the zip file by clicking the JDK 18 download link according to your operating system. Alternatively, you can visit the Oracle site and search for the latest build of JDK 18. Then, unzip the file and paste the 'jdk-18' folder into the directory where your other JDK versions exist. In my case, it is 'C:\Program Files\Java', which is the usual place for JDK installations.

Why use Eclipse 2022-03?

If you download Eclipse 2022-03, you get built-in support up to JDK 17; you just need to download and install a plugin to get the additional JDK 18 features. On the other hand, if you already have a lower version of Eclipse on your system, you can also download & install the specific plugin to get the features of JDK 18.

How to install Eclipse?

The Eclipse community recommends a specific version of Eclipse in order to get JDK 18 support: Eclipse 2022-03. You can directly click on the link 'Eclipse 2022-03 for Java EE developers' to get it. Once you click on the link you will be on the Eclipse download page; click on the download link appearing on the page or follow the updated steps mentioned there.

Steps to add JDK 18 support in Eclipse

In order to add JDK 18 support in your Eclipse, follow the steps below carefully. We are providing more than one way to get it done, each explained step by step for a clear understanding of the whole process.

Method#1: Using Drag & Drop facility provided by Eclipse Community

This is the easiest way to get the JDK 18 feature in Eclipse. If this method doesn't work for you, you may switch to the methods below. It is also advisable to already have Eclipse 2022-03 on your system. Please follow the steps below to get it done.

1) Open the link Java 18 Support for Eclipse 2022-03 (4.23). Hover your mouse over the 'Install' button and you will see that it is draggable.

2) Open your eclipse

3) Drag and drop it into the workspace of your Eclipse and follow the further instructions. You will see one item, 'Java 18 Support for Eclipse 2022-03 (4.23)'. Select it and click on the 'Install' button.

4) If the installation finishes without any errors or warnings, you are done.

Method#2: Using Install New Software

Let's follow the steps below for the method 'Using Install New Software'.

1) Make sure you already installed JDK 18 and Eclipse 2022-03 on your system. If not, kindly visit a section given above for the guidelines in this article.
2) Open your eclipse, click on Help-> Install New Software
3) In the first field(Work with) paste the below URL and hit Enter.
https://download.eclipse.org/eclipse/updates/4.23-P-builds/
4) Multiple items with checkboxes will appear. Select 'Eclipse Java 18 support for 2022-03 development stream' and then click on the 'Next' button. A new window will appear with the download progress. Click on 'Finish' to complete the process.
5) At the end, restart Eclipse; you may be prompted to do so in order to get the new features.
6) Then you need to create one Test Project to check the JDK 18 support.
7) Now click on Project -> Properties. Then select ‘Project Facets’ from the left.
8) Click on ‘Java’ checkbox and then select ’18’ from the drop-down and then click on ‘Apply and Close’ button.


Method#3: Using Eclipse Marketplace

Eclipse 2022-03 comes with the Eclipse Marketplace. If you don't find the Marketplace in your particular version, you can refer to our article 'How to install marketplace in Eclipse'. Let's follow the steps below for the method 'Using Eclipse Marketplace'.

1) Make sure you already installed JDK 18 and Eclipse 2022-03 on your system. If not, kindly visit above section.
2) Open your eclipse, click on Help-> Eclipse Marketplace
3) In the search bar type ‘Java 18’ or ‘JDK 18’ and hit enter.
4) You will find one item, 'Java 18 Support for Eclipse 2022-03 (4.23)'.
5) Now click on the 'Install' button.
6) Next, you will see a window with the heading 'Confirm Selected Features'. Click on the 'Confirm' button here and leave the checkboxes selected as they are.
7) Now, a popup window with the message 'Restart Eclipse IDE to apply the software update' will appear. Here, click on the 'Restart now' button.

8) Then, you need to create one Test Project to test the JDK 18 support.
9) Now, click on Project -> Properties. Then select ‘Project Facets’ from the left.
10) Click on ‘Java’ checkbox and then select ’18’ from the drop-down and then click on ‘Apply and Close’ button.


How to Verify that Eclipse has Java 18 Support in place?

1) Click on Project -> Properties. Then select ‘Java Compiler’ from the left.
2) Uncheck the second checkbox ('Use compliance from execution environment').
3) Now click on the drop-down on the right side. You will see an additional entry for JDK 18, shown as '18' or '18 (BETA)'.
4) Select ’18’ and click on ‘Apply and close’.

♥ Finally, you are ready to develop a project in Eclipse using Java 18.


How to test JDK 18 features in Eclipse?

1) Create a new Java Test Project in Eclipse.
2) Right click on the project and navigate to Build Path -> Configure Build Path
3) Then, click on libraries
4) Now, add libraries from the right
5) Set the installation path for JDK 18
6) Finally, enjoy the testing of JDK 18 features.

That's all about configuring & testing the JDK 18 support environment in Eclipse.

Eclipse not recognizing Java 18

If your Eclipse is unable to recognize the Java 18 version, you need to follow the steps above in this article.

♠ Here is the link to get to know ‘How to enable JDK 17 support in Eclipse?

♥  Also, Here is the link to get to know ‘How to enable JDK 16 support in Eclipse?

♥ And here is the link to get to know ‘How to enable JDK 15 support in Eclipse?


How to Write Spring Boot Application Properties Files

If you are a Spring Boot developer, you must have come across the 'application.properties' or 'application.yml' file. Needless to say, these files reduce your development effort by minimizing the amount of XML that you would write in a standard Spring project. Moreover, we accommodate the common properties of our project in the form of key-value pairs in these files. Therefore, it becomes more than important to know all about these files. These are application properties files with the extension of either '.properties' or '.yml'. Hence, our topic for this article is 'How to Write Spring Boot Application Properties Files'.

Furthermore, Spring Boot already provides some ready-made keys that we use most of the time in our projects. Sometimes we need to create custom properties files in order to fulfil the business requirements. So, we will discuss all of them in this article step by step. Additionally, if you are going to appear in any Spring Boot interview, you can expect at least one question from this article. Let's go through our article 'How to Write Spring Boot Application Properties Files' and its related concepts step by step.


What is application.properties in Spring Boot?

In a Spring Boot application, 'application.properties' is an input file used to set the application up. Unlike standard Spring framework configuration, this file is auto-detected by a Spring Boot application when placed inside the 'src/main/resources' directory. Therefore, we don't need to register a PropertySource explicitly, or even provide a path to the property file.

An application.properties file stores various properties in key=value format. These properties provide input to the Spring container and behave like one-time input or static data.

What types of keys does an application.properties contain?

An application.properties file contains two types of keys: pre-defined keys and programmer-defined keys.

Pre-defined Keys

Spring Boot provides several common default properties to specify inside application.properties, for example to support databases, email, JPA, Hibernate, logging, AOP, etc. We can find the list of Common Application Properties in the Spring official documentation.

However, we don't need to define all the default properties every time; we define only those that our application currently requires. This is also a benefit of using Spring Boot: it reduces XML-based configuration and replaces it with simple properties.
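
For example, a few commonly used pre-defined keys look like the following (the values shown here are only placeholders):

server.port=8081
spring.datasource.url=jdbc:mysql://localhost:3306/testdb
spring.datasource.username=root
spring.jpa.show-sql=true
logging.level.org.springframework=INFO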

Programmer-defined Keys

A programmer can also define project-specific keys whenever required. We will discuss how to define them in the following sections of this article.

Can we modify/rename file name application.properties in Spring Boot?

Yes, we can, but such a file is not loaded by Spring Boot by default. By default, Spring Boot checks for 'application.properties' under the location 'src/main/resources'. If we want to load other files, we follow the steps below:

1) Create your custom file at the same location ('src/main/resources')
2) Apply the @PropertySource annotation at the starter/runner class and provide the name of your custom file as a parameter to it.
For example: @PropertySource("classpath:xyz.properties")
Here, the classpath is the 'src/main/resources' folder.

Alternatively, we can also keep this file inside the project folder (file:/ = project folder).

What are the valid locations of Properties files in the Project? How to load them?

Generally, there are two valid locations for Properties file:

1) src/main/resources (classpath:)
Syntax for this is: @PropertySource("classpath:application.properties")

2) Inside the project folder (file:/)
Syntax for this is: @PropertySource("file:/application.properties")

How to read keys of the properties file into the application?

In order to read values from keys, we have two ways:

Using @Value

The Spring Framework provides this annotation. To read one key from the properties file, we declare a variable in our class and apply the @Value annotation to it with the key as a parameter, like @Value("${key}").

♥ At a time we can read one key value into one variable. For example, if we have 10 keys, then we should write @Value on top of 10 variables.

♥ If the key is not present in the properties file and we try to read using @Value() then it will throw an Exception:
IllegalArgumentException: Could not resolve placeholder ‘my.app.id’ in value “${my.app.id}”
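
As a minimal sketch (the class name and keys are only illustrative, reusing the 'my.app.id' key mentioned above), reading keys with @Value looks like this:

import org.springframework.beans.factory.annotation.Value;
import org.springframework.stereotype.Component;

@Component
public class AppInfoReader {

    @Value("${my.app.id}")          // injects the value of the key 'my.app.id'
    private String appId;

    @Value("${my.app.name:demo}")   // a default value after ':' avoids the exception shown above
    private String appName;
}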

Using @ConfigurationProperties

Spring Boot provides this annotation. We will discuss it in detail in the subsequent sections.

How does the @Value work internally in Spring?

First, the 'application.properties' file is loaded, for example via @PropertySource("classpath:application.properties"). The Spring container then creates an in-memory Environment (a set of key-value pairs) that holds all the key-value pairs given by the properties file. We can read those values into variables using the syntax @Value("${key}"): @Value searches for the key in the Environment memory and, if it is found, the value is injected into the container-managed object.

How to write Custom Properties file in Spring Boot?

There are certain steps to write our own properties file. Let's follow the steps below:

Step#1:  Define your properties file with any name with extension .properties

Right click on ‘src/main/resources’ > new > file > Enter name : product.properties
Additionally, Enter content as applicable. For example,

-----product.properties------
product.title=my product
product.version=2.5

Step#2: At the main class/starter class, apply the @PropertySource annotation
In our case, the syntax will be: @PropertySource("classpath:product.properties")

Example: 

For Example, let’s assume we are going to define a ‘product.properties’

Step #1: Create a Spring Boot Project

Here, we will use STS(Spring Tool Suite) to create our Spring Boot Project. If you are new to Spring Boot, visit Internal Link to create a sample project in spring boot.

Step#2: Create a new properties file under src/main/resources folder

Let’s name it product.properties.

product.title=my product
product.version=2.5

Step#3: Create a class to read values from properties file

In order to read values from properties file, let’s create a class Product.java as below. Don’t forget to apply @Component annotation.

import org.springframework.beans.factory.annotation.Value;
import org.springframework.stereotype.Component;

@Component
public class Product {

    @Value("${product.title}")
    private String title;

    @Value("${product.version}")
    private Double version;

    @Override
    public String toString() {
       return "Product [title=" + title + ", version=" + version + "]";
    }
}

Step#4: Apply @PropertySource(“—“) at the main/starter class

In the main or starter class, whichever is applicable in your case, apply @PropertySource("classpath:product.properties") in order to tell the Spring container that you are using a custom properties file, as below. Moreover, we print the Product bean to check the values.

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.ApplicationContext;
import org.springframework.context.annotation.PropertySource;

@SpringBootApplication
@PropertySource("classpath:product.properties")
public class SpringBootPropertiesApplication {

    public static void main(String[] args) {

       ApplicationContext applicationContext = SpringApplication.run(SpringBootPropertiesApplication.class, args);
       Product product= applicationContext.getBean("product", Product.class);
       System.out.println(product);
    }
}

Step#5: Test the output

Right click on the project, Run As > Spring Boot App. Now check the output at console. You will see the below output.

Product [title=my product, version=2.5]

If a key is present in both application.properties and our custom properties file, then which one will be selected?

Key from ‘application.properties’ will always have a higher priority.

How can we create and load multiple properties files in the application?

It is absolutely possible to create & load multiple properties files in the application with the help of @PropertySource itself. Below is the syntax to use it in case of multiple properties.

@PropertySource({
"classpath:abc.properties",
"classpath:pqr.properties",
"classpath:xyz.properties",
"....",
"...."
})

If we define the same key with different values in different properties files, which one will be selected?

In this case, the key from the last loaded properties file will be considered.

For example, consider the properties file declaration below.
@PropertySource({
"classpath:abc.properties",   // loading order 1st
"classpath:xyz.properties"    // loading order 2nd
})

From the above example, Spring first checks 'abc.properties' for the given key; if the key also exists in 'xyz.properties', the value from 'xyz.properties' is reflected, as it is loaded last.
♦ If the key doesn't exist in either of these files, the value from 'application.properties' takes priority.

How to map multiple properties with a single variable? When to use @ConfigurationProperties annotation?

As aforementioned, we can use the @Value annotation to read only one property from the properties file at a time into our code; the Spring core framework provides this feature. Suppose we have a requirement where we need to read multiple properties from the properties file into our code. Here, we make use of the @ConfigurationProperties annotation. Furthermore, @ConfigurationProperties also supports working with Collections and bean types (class types). We will actually face this kind of scenario many times in a project. Let's be aware that Spring Boot provides the @ConfigurationProperties annotation.

What are the general guidelines/rules to use @ConfigurationProperties annotation?

1) The prefix provided inside the @ConfigurationProperties annotation must match the prefix given in the properties file, otherwise the variables will not get the correct values.

2) We must generate getters and setters for the variables that we map to the keys of the properties file.

Note: Since @Value uses a reflection-based data read, getters & setters are not required in that case.

♥ Note: While applying the @ConfigurationProperties annotation, you may get a warning: 'When using @ConfigurationProperties it is recommended to add spring-boot-configuration-processor to your classpath to generate configuration metadata'. You can resolve it by clicking on the warning and selecting the option to add spring-boot-configuration-processor to pom.xml. An alternative way to resolve it is to manually add the dependency 'spring-boot-configuration-processor' in your pom.xml file.

Example: How to read multiple values from the properties file in our code?

Let’s develop an example of reading multiple values from the properties file in our code. However, in this example we will consider the simplest case. Further, in the subsequent sections, we will cover the complex scenarios.

Step#1: Update the application.properties file 

For example, let's consider the values below. Here, each entry is in the form 'prefix.variable=value', so 'prefix.variable' forms the key. The last word of the key becomes the variable in our class, while the remaining left-hand part becomes the prefix, which in turn becomes the parameter of the @ConfigurationProperties annotation.

product.app.id=1024
product.app.code=QS5329D
product.app.version=3.34

Step#2: Create a Runner class and define variables

Let’s create a class and define variables. Here we are taking a Runner class just to test the example easily. Also, let’s follow below guidelines.

1) Apply the annotation @ConfigurationProperties("product.app") on top of the class. Make sure that the prefix passed as a parameter of this annotation matches the prefix given in the keys of the properties file. In our case, 'product.app' is the prefix.

2) After the prefix we have id, code & version respectively in the keys. Hence, these parts of the keys will become the variable names in our class, as below.

import org.springframework.boot.CommandLineRunner;
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.context.annotation.PropertySource;
import org.springframework.stereotype.Component;

@Component
@ConfigurationProperties("product.app")
@PropertySource("classpath:product.properties")
public class ProductDataReader implements CommandLineRunner {

    private Integer id;
    private String code;
    private Double version;

    @Override
    public void run(String... args) throws Exception {
       System.out.println(this);
    }

    @Override
    public String toString() {
       return "ProductDataReader [id=" + id + ", code=" + code + ", version=" + version + "]";
    }

    public Integer getId() {
       return id;
    }

    public void setId(Integer id) {
       this.id = id;
    }

    public String getCode() {
       return code;
    }

    public void setCode(String code) {
       this.code = code;
    }

    public Double getVersion() {
       return version;
    }

    public void setVersion(Double version) {
       this.version = version;
    }
}

However, in this example, we have applied @PropertySource(“classpath:product.properties”) in Runner class. We can also apply it at the main class optionally.

Step#3: Check the Output

Now, run the application and check the output. It will be like below.

ProductDataReader [id=1024, code=QS5329D, version=3.34]

How to read multiple values from the properties file in our code as a Collection?

We need to follow some guidelines in order to read values from the properties file as a Collection in our code. Every developer knows how to define a collection variable in a class, but how to define the corresponding values in the properties file is the point of discussion here. Let's discuss it step by step below.

List/Set/Array 

In the case of List, Set and Array, we use an index that starts at zero, as below.

prefix.variable[index]=value
For example: product.app.info[0]=InfoValue1, product.app.info[1]=InfoValue2, ... and so on.

Map/Properties

Let's understand a basic difference between Map & Properties first. Map is an interface with multiple implementations, and the programmer supplies its generic types. Properties is a class that stores data in a single key=value format (both key and value are treated as Strings by default). In the properties file, Map & Properties take the same key=value format, as shown below.

prefix.variable.key=value
For example: product.app.category.C1=CategoryValue1, product.app.category.C2=CategoryValue2, ... and so on.

Example

Step#1: Update the application.properties file 

Let’s update the product.properties file as per the guidelines given above.

product.app.info[0]=InfoValue1
product.app.info[1]=InfoValue2
product.app.info[2]=InfoValue3

product.app.category.C1=CategoryValue1
product.app.category.C2=CategoryValue2

Step#2: Create a Runner class and define variables

As we have already discussed why we are using a Runner class here, let’s jump to the example directly as below.

import java.util.List;
import java.util.Map;
import org.springframework.boot.CommandLineRunner;
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.context.annotation.PropertySource;
import org.springframework.stereotype.Component;

@Component
@ConfigurationProperties("product.app")
@PropertySource("classpath:product.properties")
public class ProductDataReader implements CommandLineRunner {

    private List<String> info;
    //private Set<String> info;
    //private String[] info;

    private Map<String,String> category;
    //private Properties category;

    @Override
    public void run(String... args) throws Exception {
       System.out.println(this);
    }

    public List<String> getInfo() {
       return info;
    }

    public void setInfo(List<String> info) {
       this.info = info;
    }

    public Map<String, String> getCategory() {
       return category;
    }

    public void setCategory(Map<String, String> category) {
       this.category = category;
    }

    @Override
    public String toString() {
       return "ProductDataReader [info=" + info + ", category=" + category + "]";
    }
}

Step#3: Check the Output

Now, run the application and check the output. It will be like below.

ProductDataReader [info=[InfoValue1, InfoValue2, InfoValue3], category={C2=CategoryValue2, C1=CategoryValue1}]

How to read multiple values from the properties file in our code as an Object?

Instead of a Collection type, suppose our variable is an instance of a particular class type. In this case as well, we need to follow some guidelines in order to read values from the properties file. Let's discuss it step by step below.

We can even read properties data into one class object by using HAS-A relation and syntax is as below:
prefix.objectName.variable=value     OR       prefix.hasAVariable.variable=value

Step#1: Update the application.properties file 

Let’s update the product.properties file as per the guidelines given above.

product.app.brand.name=brandName
product.app.brand.price=4569.75

Step#2: Create a model class and define its properties 

In order to have a variable of a particular (class) type, let's create a model class & define some properties. To illustrate, let's assume that our Product has a Brand. Therefore, we will create a Brand class step by step as below.

1)  Create one class with variables(properties)
2)  Generate getters, setters, toString.
3)  Use this class as Data Type and create HAS-A variable(reference variable) in other class.

Note: Do not apply @Component on this class. This object is created only on one condition: if the corresponding data exists in the properties file. This work is taken care of by the @ConfigurationProperties annotation.

public class Brand {

    private String name;

    private Double price;

    public String getName() {
       return name;
    }

    public void setName(String name) {
       this.name = name;
    }

    public Double getPrice() {
       return price;
    }

    public void setPrice(Double price) {
       this.price = price;
    }

    @Override
    public String toString() {
       return "Brand [name=" + name + ", price=" + price + "]";
    }
}

Step#3: Create a Runner class and define variables

As we have already discussed why we are using a Runner class here, let’s jump to the example directly as below.

import org.springframework.boot.CommandLineRunner;
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.context.annotation.PropertySource;
import org.springframework.stereotype.Component;

@Component
@ConfigurationProperties("product.app")
@PropertySource("classpath:product.properties")
public class ProductDataReader implements CommandLineRunner {

    private Brand brand; //HAS-A 

    @Override
    public void run(String... args) throws Exception {
       System.out.println(this);
    }

    public Brand getBrand() {
       return brand;
    }

    public void setBrand(Brand brand) {
       this.brand = brand;
    }

    @Override
    public String toString() {
       return "ProductDataReader [brand=" + brand + "]";
    }
}

Step#4: Check the Output

Now, run the application and check the output. It will be like below.

ProductDataReader [brand=Brand [name=brandName, price=4569.75]]

What is YAML(.yml) file? 

YAML is a format for defining key: value pairs without repeating the same words across multiple keys. In other words, YAML is a convenient format for specifying hierarchical configuration data. Spring Boot uses the SnakeYAML API to parse it, and this library is added by default in every Spring Boot application. Moreover, it has the following benefits for the developer.

1) It has a good look and feel and is easy to read.
2) There are no duplicate words in levels/keys.
3) Auto-parsing is supported using the SnakeYAML API.
4) It is also easy to write.

What are the rules/guidelines to write a YAML(.yml) file?

1) Unlike a .properties file, dot(.) and equals(=) are not used as separators; only the colon(:) is allowed.
2) Before each value in a .yml file, we must provide at least one space after the colon.
3) After a colon, if the key is still incomplete, go to a new line and indent with spaces.
4) The number of spaces (indentation/alignment) must match for keys at the same level.
5) Do not provide any duplicate levels/hierarchy.

How to convert into YAML(.yml) file from a .properties file?

Generally, there are two ways to convert a .properties file into a .yml file: 1) Manually  2) With the help of IDE like STS.

Manually

1) Replace the dot(.) and equals(=) separators with a colon(:) and proper indentation for each level of the key.
2) Follow the rules/guidelines given in the section above.

If your file has lengthy content, manual conversion will take too much effort.

Using STS

If you are using an IDE like STS(Spring Tool Suite), you can perform conversion in a single click as below.

Right click on the .properties file, select the option 'Convert .properties to .yaml', and finally click on 'OK'. You will get a .yml file in less than a minute.

Example: Converting .properties into .yml

For example, Let’s convert our ‘product.properties’ into ‘product.yml’.

product.properties

product.app.id=1024
product.app.code=QS5329D
product.app.version=3.34

product.app.info[0]=InfoValue1
product.app.info[1]=InfoValue2
product.app.info[2]=InfoValue3

product.app.category.C1=CategoryValue1
product.app.category.C2=CategoryValue2

product.app.brand.name=brandName
product.app.brand.price=4569.75

Apply any one of the method of conversion given in above section. Your final ‘product.yml’ file will look like below.

product.yml

product:
  app:
    brand:
      name: brandName
      price: 4569.75
    category:
      C1: CategoryValue1
      C2: CategoryValue2
    code: QS5329D
    id: 1024
    info:
    - InfoValue1
    - InfoValue2
    - InfoValue3
    version: 3.34


How To Implement Spring Cloud Config Server In Microservices

In a Java application based on a microservices architecture, we have multiple microservices in the form of multiple Spring Boot applications. Each Spring Boot application has its separate configuration file, where we specify values in the form of key-value pairs. If you have worked with Spring Boot, you have guessed that we are talking about the application.properties file. You might have noticed that some of the entries in each application's application.properties file are common, such as registering with the Eureka server, or Email, Security and JPA configurations. If we can keep these common entries in one central place and make them accessible to every application, development becomes easier. Therefore, how to create a central repository to accommodate all these common entries is the subject of this article. Hence our topic is 'How To Implement Spring Cloud Config Server In Microservices'.

Generally, we create a space to accommodate our common configuration file in one of the most common repositories, such as GitHub, GitLab, Bitbucket etc., provided by a Git vendor. The location of this space is mapped into a central server called the Config Server, provided by Spring Cloud. Each microservice that uses the common configuration entries will have the Config Server location and will act as a Config Client for that Config Server. We will discuss all of this in detail in this article. Let's start discussing 'How To Implement Spring Cloud Config Server In Microservices' and its related concepts.


Software/Technologies Used in the Example

Sometimes one version conflicts with another. Hence, below is the combination of software versions that has been proven to work together and was used to develop these examples.

1) GitHub Repository
2) JDK 1.8 or later (Tested on JDK 11)
3) Spring Boot 2.6.6
4) Spring Framework 5.3.18
5) Spring Cloud 2021.0.1
6) Spring Cloud Config Server 3.1.1
7) Maven 4.0.0
8) IDE – STS 4.7.1.RELEASE
9) Postman v9.0.9

What is Spring Cloud Config Server?

Spring Cloud Config is a starter project to manage common configurations. It is integrated with Spring Boot and provided by Spring Cloud. When we create a Spring Boot Project and include the dependency of Spring Cloud Config in order to deal with configuration management, it acts as a dedicated Spring Cloud Config Server.

In other words, it acts as a central configuration server that provides configurations (properties) to each microservice connected to it. Moreover, it significantly simplifies the management of many microservices by centralizing their configuration in one location. Also, it provides the ability to refresh the configuration of a microservice without redeploying the configuration changes. We apply the @EnableConfigServer annotation at the main class of the application to indicate that this application will act as a Spring Cloud Config Server.

What is Spring Cloud Config Client?

Spring Cloud Config Server is the more popular term among microservices developers. On the other hand, we also have the Spring Cloud Config Client, which utilizes the services provided by the Spring Cloud Config Server. Technically, these are microservices or Spring Boot applications that take direct advantage of the Spring Cloud Config Server. In order to connect with the Config Server, we provide the entry of the Config Server in the application.properties file of the Config Client.

Why do we need a Spring Cloud Config Server?

Spring Boot is the most popular framework for developing microservices rapidly and smoothly. Using Spring Boot it’s easy to create even hundreds of microservices for a project. However, managing the configurations of all these services sometimes becomes a headache. As each service has its own configuration, it works smoothly for small, monolithic applications. But when we deal with hundreds of services and potentially thousands of configurations in microservices based applications, managing all of them can be a concern. Here Spring Cloud Config Server comes into the picture and makes a developer’s life easy by offering a common place for all common configurations.

What are the Advantages of Using Spring Cloud Config Server?

1) It promotes code reusability and removes code repetition. By using Spring Cloud Config Server, we have a centralized configuration. Therefore, we don’t need to write the same set of properties in multiple microservices. Instead, write them at one place and access from any microservice.

2) On the other hand, we don’t need to redeploy applications in case we modify any configuration.

What are the different ways to implement Config Server In Microservices?

Typically, there are two ways to implement Config Server in Microservices.

1) External Config Server

2) Native Config Server

External Config Server

In this approach, we use a remote repository such as GitHub, GitLab, Bitbucket etc. to accommodate our common config file. This approach is used in a production environment.

Native Config Server

In the Native Config Server (or Internal Config Server) approach, we use the local system to accommodate our common config file, for example by keeping the file in a local folder (or a local Git repository) and pointing the Config Server to it. This approach is generally used in a development or testing environment.

Furthermore, the steps for implementing both of them are similar, except for the location entry in the Config Server. Here, we will discuss the External Config Server approach, which can be useful in a production environment.
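
For reference, the Native approach differs only in the Config Server's application.properties. A minimal sketch (the folder path is illustrative) would activate the 'native' profile and point to a local folder instead of a Git URL, while all the remaining steps stay the same:

# read configuration from the local file system instead of Git
spring.profiles.active=native
# local folder that contains the common configuration files
spring.cloud.config.server.native.search-locations=file:///C:/config-repo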

How To Implement Spring Cloud Config Server In Microservices?

So far, we are clear on what a Config Server is, why we need it, and how it works. Now it's time to dig into our article header 'How To Implement Spring Cloud Config Server In Microservices' with an example and become more confident in implementing it by following a step-by-step process.

Use Case Details

In order to implement the Config Server on a live project, let's consider a use case where we will have a minimum of three microservices. The first microservice will be a Eureka Server to register other microservices. The second microservice will be the Config Server, acting like a mediator between a third-party repository (GitHub, GitLab, Bitbucket etc.) and the other microservices in the application. Furthermore, the third microservice will be our general-purpose service that will use the configuration values at run time. It will also act as a Config Client for the aforesaid Config Server.

Create a Repository in GitHub

If you don’t have your account in GitHub, create it to accommodate your configuration file and access it remotely with the help of Config Server. You can use any remote repository like GitHub, GitLab, Bitbucket etc. Here in this example, we will use GitHub repository. In order to create a new repository in GitHub, follow below steps.

1) Register and Login to GitHub

2) Create a new repository e.g. config_repo

3) Create one file, e.g. application.properties. Enter some property in the form of a key-value pair, for example: app.title=Sample App. Finally, commit the changes. Please note that we can also create an application.yml file.

4) Note down the repository URL: https://github.com/username/config_repo.git, along with your username and password. We will require the URL, username & password when we create the Config Server at a later stage in this article.

Create Microservice #1(Eureka Server)

First of all, we will create a Eureka Server service so that microservices can register themselves and discover & communicate with each other. Please note that the Eureka server itself acts as a microservice. Moreover, it is nothing but a Spring Boot project that incorporates Spring Cloud's Eureka Server dependency. In order to indicate that this is a Eureka Server, our main class will have the @EnableEurekaServer annotation. Additionally, we will have some specific properties in the application.properties file that will clearly indicate that this application/microservice is a Eureka Server. In order to create the Eureka Server, follow the steps below.

Step #1: Create a Spring Boot Project

Here, we will use STS(Spring Tool Suite) to create our Spring Boot Project. If you are new to Spring Boot, visit Internal Link to create a sample project in spring boot. While creating a project in STS, add starter ‘Eureka Server’ in order to get features of it.

Step #2: Apply Annotation @EnableEurekaServer at the main class

In order to make your application/microservice act as a Eureka server, you need to apply @EnableEurekaServer at the main class of your application.

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.netflix.eureka.server.EnableEurekaServer;

@SpringBootApplication
@EnableEurekaServer
public class SpringCloudEurekaServerApplication {

   public static void main(String[] args) {
      SpringApplication.run(SpringCloudEurekaServerApplication.class, args);
   }
}

Step #3: Modify application.properties file

Add below properties in your application.properties file.

server.port=8761
eureka.client.register-with-eureka=false
eureka.client.fetch-registry=false

Create Microservice #2(Config Server)

This is one of the important microservices of our article, i.e. the Config Server. We will provide the dependency of Config Server as a starter project in our service. Let's develop it step by step.

Step #1: Create a Spring Boot Project

Here also, we will use STS (Spring Tool Suite) to create our Spring Boot Project. While creating the project in STS, add the starter 'Config Server' in order to get all the required features.

Step #2: Apply Annotation @EnableConfigServer at the main class

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.config.server.EnableConfigServer;

@SpringBootApplication
@EnableConfigServer
public class SpringCloudConfigServerApplication {

    public static void main(String[] args) {
       SpringApplication.run(SpringCloudConfigServerApplication.class, args);
    }
}

Step #3: Modify application.properties file

Add below properties in your application.properties file.

# Server port
server.port=8888
# Repository Location in Github
spring.cloud.config.server.git.uri=https://github.com/username/config_repo.git
# Github username
spring.cloud.config.server.git.username=yourUserName
# Github Password
spring.cloud.config.server.git.password=yourPassword
# Github default branch
spring.cloud.config.server.git.default-label=main
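
Before creating any client, you can optionally verify that the Config Server is able to read the repository. Since the file committed in the repository is named application.properties, opening the below URL in a browser (the path segments stand for application name and profile) should return a JSON document containing the property committed earlier (app.title in our example).

http://localhost:8888/application/default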

Create Microservice #3(Microservice as a Config Client)

This service will act as a Config Client for the aforementioned Config Server. We will include config client as a dependent starter project in order to use the features of a Config Client. Let’s develop it step by step.

Step #1: Create a Spring Boot Project

Here also, we will use STS (Spring Tool Suite) to create our Spring Boot Project. While creating the project in STS, add the starters 'Config Client', 'Spring Web', and 'Eureka Discovery Client' in order to get all the required features.

Step #2: Apply Annotation @EnableEurekaClient at the main class

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.netflix.eureka.EnableEurekaClient;

@SpringBootApplication
@EnableEurekaClient
public class SpringCloudProductServiceApplication {

    public static void main(String[] args) {
       SpringApplication.run(SpringCloudProductServiceApplication.class, args);
    }
}

Step #3: Modify application.properties file

Add below properties in your application.properties file.

# port
server.port=9940

# serviceId (application-name)
spring.application.name=PRODUCT-SERVICE

# Eureka Location
eureka.client.service-url.defaultZone=http://localhost:8761/eureka

#Config Server location
spring.config.import=optional:configserver:http://localhost:8888

Note: In a real-time scenario, localhost will be replaced by the actual server name or IP address as applicable.

Step #4: Create a RestController class as ProductRestController.java

In order to test whether our Product Service is getting values from the remote repository or not, we will create a ProductRestController and define the endpoints as below.

import org.springframework.beans.factory.annotation.Value;
import org.springframework.http.HttpStatus;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
@RequestMapping("/product")
public class ProductRestController {
   
   @Value("${app.title}")
   private String title;
 
   @GetMapping("/data")
   public ResponseEntity<String> showProductMsg() {
      return new ResponseEntity<String>("Value of title from Config Server: "+title, HttpStatus.OK);
   }
}

How to test Config Server Enabled Application?

It’s time to test our implemented concept of Config Server. Please follow below steps:

1) Start Service/Application containing Eureka Server

2) Start Application containing Config Server

3) Start Application containing Product Service

4) Open a browser window, hit below URLs and observe the results.

http://localhost:9940/product/data

If you are getting the configuration property values from the GitHub repository in the response, your implementation is correct.

What if the same key is present in the microservice configuration file and in the Config Server with different values?

Sometimes you may face a situation where, by mistake, you have configuration properties with the same keys in both places: in your microservice's application.properties file and in the GitHub repository connected to your Config Server. In that case, the Config Server takes the higher priority. For example, if app.title is defined both locally and in the repository, the service will see the value coming from the repository.

What is Spring Boot Actuator?

In a nutshell, Spring Boot Actuator is a starter project (sub-project) of the Spring Boot Framework. We use it to monitor & get health checks of a production-ready application. Moreover, it offers various services as part of monitoring, such as understanding the traffic, knowing the state of the database, knowing the application environment, gathering of metrics and many more. It uses HTTP endpoints to expose this operational information about any running application.

In order to enable Spring Boot Actuator endpoints in our Spring Boot application, we need to add the Spring Boot Starter Actuator dependency in our build configuration file, i.e. pom.xml (for Maven users). If you are using STS as an IDE to develop your project, you can select 'Spring Boot Actuator' as a starter dependency while creating your project. You can even go to the 'Edit Starters' section of STS and add it later if you did not add it while creating the project. Below is the dependency for pom.xml.

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-actuator</artifactId>
</dependency>
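
Once this dependency is on the classpath, a few endpoints are exposed over HTTP by default. For example, hitting the below URL (using the port of the service that includes the dependency; 9940 for our Product Service) should return the basic health status of the application.

http://localhost:9940/actuator/health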

What is RefreshScope?

RefreshScope is a class under Spring Cloud that allows for beans to be refreshed dynamically at runtime (see refresh(String) and refreshAll()). If a bean is refreshed then the next time the bean is accessed (i.e. a method is executed) a new instance is created. Spring Cloud offers @RefreshScope annotation to implement it in the project.

If we talk about implementation, it is nothing but an endpoint (/actuator/refresh) of the Spring Boot Actuator which helps to reload the configuration property values from the Config Server. It is provided by Spring Cloud. Additionally, we need to add the @RefreshScope annotation to the Spring bean (in our example, the RestController) whose properties need to be refreshed.

How can we get updated values from Config Server to microservices without restarting the application?

If you are going to appear in an interview where microservices knowledge is required, this question may arise. Furthermore, you need to understand what the interviewer wants you to explain. Here, the interviewer wants to check if you have knowledge of refresh scope or not.

Let's understand programmatically how we can achieve this by following the step-by-step process below:

Step#1: Add Spring Boot Actuator dependency in your service (Spring Boot Application)

In our case, we will use it in our Product Service. We are using STS(Spring Tool Suite) to develop our project.

Go to Spring Boot Project → Right Click → Spring → Add Starters → search for ‘Actuator’ → Select ‘Spring Boot Actuator’ → Click on ‘OK’

Step#2: Update application.properties of your Application

Add below entry in your application.properties

#Activate Spring Boot Actuator endpoints
#management.endpoints.web.exposure.include=refresh
management.endpoints.web.exposure.include=*

Exposing only the 'refresh' endpoint is usually enough; if that alone does not work in your setup, include '*' to be on the safer side. If 'refresh' works, there is no need to include '*'.

Step#3: Add @RefreshScope at the RestController of your microservice

In our case, we will add @RefreshScope at the ProductRestController class as below.

import org.springframework.beans.factory.annotation.Value;
import org.springframework.cloud.context.config.annotation.RefreshScope;
import org.springframework.http.HttpStatus;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
@RequestMapping("/product")
@RefreshScope
public class ProductRestController {
   
   @Value("${app.title}")
   private String title;
 
   @GetMapping("/data")
   public ResponseEntity<String> showProductMsg() {
      return new ResponseEntity<String>("Value of title from Config Server: "+title, HttpStatus.OK);
   }
}

How to test Config Server with Refresh Scope Enabled Application?

In order to test Refresh Scope, we will use POSTMAN tool. Let’s follow below process step by step.

Step#1: Open Postman tool

Step#2: Change the value of the property in GitHub and commit the changes. In our example it's 'app.title'.

Step#3: Go to the Postman tool, select the method 'POST', enter the URL 'http://localhost:9940/actuator/refresh' and click on the Send button. You will receive a 200 status code, and the response will list the property keys that were refreshed.
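
If you do not have Postman, any HTTP client can be used for this step. For example (assuming the same port), the command 'curl -X POST http://localhost:9940/actuator/refresh' makes the same empty POST call.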

Step#4: Open your browser and hit the actual URL (http://localhost:9940/product/data). If the browser with the actual URL is already open, just refresh the browser. You will see the updated value.

Conclusion

After going through all the theoretical & example parts of 'How To Implement Spring Cloud Config Server In Microservices', we should finally be able to implement Spring Cloud Config Server in Microservices. Similarly, we expect you to further extend these examples and implement them in your project accordingly. In addition, if there is any update in the future, we will update the article accordingly. Moreover, feel free to provide your comments in the comments section below.

How To Implement Spring Cloud Gateway In Microservices

Microservices architecture allows us to deploy multiple services on different servers (hosts) in a private network. When a client request comes to a microservice, it should be authenticated before the request is processed. Suppose we have 100 different services in a microservices based application. If the client wants to interact with all of them, it will have to pass authentication 100 times. Reducing this number of calls is one of the reasons to learn 'How To Implement Spring Cloud Gateway In Microservices'.

If we have one service which takes care of authentication and also forwards the request to the concerned service, clients can get the response faster. Yes! We have such a service, and it is nothing but the Spring Cloud Gateway API. In other words, it acts as a mediator between client & service; that's why it is called a Gateway. What else it can do apart from these tasks, we will discuss in the subsequent sections. Therefore, in this article we will discuss 'How To Implement Spring Cloud Gateway In Microservices'.


What is Spring Cloud Gateway?

Spring Cloud Gateway is a starter project provided by Spring Cloud. It provides an API Gateway built on top of the Spring Ecosystem, including: Spring 5.x, Spring Boot 2.x, Spring WebFlux and Project Reactor. Spring Cloud Gateway targets to offer a simple, yet effective way to route to APIs and provide cross cutting concerns to them such as: security, monitoring/metrics, and resiliency.

Spring Cloud Gateway works on the Netty server provided by Spring Boot and Spring Webflux. It does not work in a traditional Servlet Container like Tomcat. Therefore, when we start a Spring Boot Gateway application, it automatically starts with Netty server.

What is an API Gateway?

An API Gateway is a single entry & exit point to multiple microservices for an external application/client. Sometimes it is also called an Edge Microservice. Since external clients are restricted from accessing the microservices directly, it acts as a mediator between the external clients & the collection of microservices. In a real-world scenario an external client could be: a desktop application, a mobile application, any third-party app, external services etc.

Why do we need an API Gateway?

In a Microservice based Application, the different services are generally deployed on different hosts/servers. In this case, the client needs to remember the host & port of each service for interaction, which is very tedious, and there are also chances of a security breach. Hence, the API Gateway validates the authentication, uses its intelligence & routes the client request to the appropriate service to process it accordingly. Apart from security & routing, it helps in monitoring/metrics, and resiliency.

What are the advantages of an API Gateway?

1) External clients don't need to pass through the authentication of each & every microservice. An external client needs to pass the authentication of only one microservice at the beginning, which is the API Gateway. If authentication gets rejected at the API Gateway, the request will not proceed further to any other microservice. Therefore, it improves the security of the microservices as we limit the access of external calls to all our services.

2) We don’t need to expose the endpoints of microservices to the external application/client.

3) It provides an abstraction between microservices and the clients. The client does not know about the internal architecture of our microservices system. Client will not be able to determine the location of the microservice instances.

4) Since all client requests route through the API Gateway, we need to implement the cross cutting concerns like authentication, monitoring/metrics, and resiliency only in the API Gateway. In this way, it helps in minimizing development effort.

5) Needless to say, the API Gateway makes client interaction easy because it works as a single entry & exit point for all the requests.

What are the disadvantages of an API Gateway?

1) Since every request goes through the API Gateway and its security check, it may slow down performance.

2) Suppose we implement only one API Gateway for multiple microservices; if the API Gateway fails, requests will not be processed further. Hence, we should implement multiple Gateways and manage the traffic via a load balancer.

3) Sometimes we need to implement a Gateway-specific front-end. For example, we may have three different front-ends such as an Android client, an iOS client and a Web client. All three may require some special APIs & configuration to work properly. To overcome this situation, we may need to create multiple types of API Gateways, one for each type of client. Some people call this pattern 'Backend for Frontend'.

What are the terminologies used in API Gateway?

Generally, there are three basic terminologies used in the API Gateway: Route, Predicate, Filter

Route

The Route is the basic building block of the gateway. It is defined by an ID, a destination URI, a collection of predicates, and a collection of filters. A route is matched if the aggregate predicate is true. Basically, it represents the URL to which the incoming request is to be forwarded.

Predicate

The Predicate is nothing more than a Java 8 Predicate. The input type is a Spring Framework ServerWebExchange. This lets you match on anything from the HTTP request, such as headers or parameters. In a nutshell, it contains the condition which should match in order to forward the incoming request to a particular Route URL.

Filter

These are instances of GatewayFilter that have been constructed with a specific factory. Here, you can modify requests and responses before or after sending the downstream request. In simple words, it offers us a provision to modify the incoming request before or after sending it to the Route URL.

What is Routing in API Gateway?

It is the process of identifying a microservice based on the predicate and URL (path) and executing it. Generally, there are two types of routing: Static Routing and Dynamic Routing.

Static Routing 

When a microservice has a single instance, routing to it (also known as a direct call to the microservice) is known as static routing. In this case, the API Gateway routes the request directly to the microservice.

Dynamic Routing 

Dynamic Routing comes into the picture when a microservice has multiple instances. In this case, the API Gateway reaches Eureka to get the instance with a lower load factor and then routes the request to the corresponding microservice instance. Internally, it uses a client-side load balancer (Spring Cloud LoadBalancer) based on the given configuration.

How to Include Spring Cloud Gateway in your Project?

In order to implement the functionalities of Spring Cloud Gateway in your project, make use of the starter ‘Gateway’ in case of STS. You may also include Gateway dependency in your pom.xml as below.

<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-gateway</artifactId>
</dependency>

How to disable Spring Cloud Gateway from your Project?

If you include the starter, but you do not want the gateway to be enabled due to any reason, set below property in your application.properties file.

spring.cloud.gateway.enabled=false

Or else, comment out the gateway starter dependency in your pom.xml file.

How To Implement Spring Cloud Gateway In Microservices?

In order to implement the Spring Cloud Gateway in Microservices project, let’s assume a use case where we have 2 microservices, Eureka server and a Gateway Service. Please note that Gateway service is itself a microservice.

Details of the Use case 

Let’s assume we have 2 microservices: Order Service & Payment Service. Apart from that we will have a Gateway Service. Needless to say, in order to register & discover aforesaid microservices, we will have one Eureka Server. Please note that Gateway service must also be registered with the Eureka server in order to access other microservices details. We will also test the load balancing feature of API Gateway in case there are multiple instances of a particular service. Let’s start implementing our use case step by step.

Create Microservice #1(Eureka Server)

In order to let microservices register, discover and communicate with each other, we need to create a Eureka Server service. Creating a Eureka Server is itself similar to creating a microservice. Moreover, it is just a Spring Boot project that incorporates Spring Cloud's Eureka Server dependency. In the application.properties file we will have some specific properties that will indicate that this application/microservice is a Eureka Server. In order to create the Eureka Server, follow the steps below.

Step #1: Create a Spring Boot Project

Here, we will use STS(Spring Tool Suite) to create our Spring Boot Project. If you are new to Spring Boot, visit Internal Link to create a sample project in spring boot. While creating a project in STS, add starter ‘Eureka Server’ in order to get features of it.

Step #2: Apply Annotation @EnableEurekaServer at the main class

In order to make your application/microservice act as a Eureka server, you need to apply @EnableEurekaServer at the main class of your application.

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.netflix.eureka.server.EnableEurekaServer;

@SpringBootApplication
@EnableEurekaServer
public class SpringCloudEurekaServerApplication {

   public static void main(String[] args) {
      SpringApplication.run(SpringCloudEurekaServerApplication.class, args);
   }
}

Step #3: Modify application.properties file

Add below properties in your application.properties file.

server.port=8761
eureka.client.register-with-eureka=false
eureka.client.fetch-registry=false

Create Microservice #2(Payment Service)

In our case, Payment Service is an example of a microservice. Moreover, the Payment Service will publish REST endpoints. Let's develop it step by step.

Step #1: Create a Spring Boot Project

Here, we will use STS(Spring Tool Suite) to create our Spring Boot Project. While creating a project in STS, add starter ‘Eureka Discovery Client’, ‘Spring Web’ in order to get all required features.

Step #2: Apply Annotation @EnableEurekaClient at the main class

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.netflix.eureka.EnableEurekaClient;

@SpringBootApplication
@EnableEurekaClient
public class SpringCloudPaymentServiceApplication {

    public static void main(String[] args) {
       SpringApplication.run(SpringCloudPaymentServiceApplication.class, args);
    }
}

Step #3: Modify application.properties file

Add below properties in your application.properties file.

server.port=9009
#ServiceId
spring.application.name=PAYMENT-SERVICE
#Publish Application(Register with Eureka)
eureka.client.service-url.defaultZone=http://localhost:8761/eureka
# instance id for eureka server
eureka.instance.instance-id=${spring.application.name}:${random.value}

Moreover, 'eureka.client.service-url.defaultZone=http://localhost:8761/eureka' indicates that this service is getting registered with the Eureka Server.

Step #4: Create a RestController class as PaymentRestController.java

In order to publish Payment Service, we will create a PaymentRestController and define the endpoints as below. Please note that, here we are not using a database as our focus is on API Gateway functionality. Instead, we are using some hardcoded values just to illustrate an example wherever required.

import org.springframework.beans.factory.annotation.Value;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
@RequestMapping("/payment")
public class PaymentRestController {

    @Value("${server.port}")
    private String port;

    @GetMapping("/info")
    public ResponseEntity<String> showPaymentInfo() {
       return ResponseEntity.ok("FROM PAYMENT SERVICE, Port# is: " + port);
    }
}

Create Microservice #3(Order Service)

In our case, Order Service is an example of a microservice. Moreover, the order service will publish the REST endpoints. Let’s develop it step by step.

Step #1: Create a Spring Boot Project

Here, we will use STS(Spring Tool Suite) to create our Spring Boot Project. While creating a project in STS, add starter ‘Eureka Discovery Client’, ‘Spring Web’ in order to get all required features.

Step #2: Apply Annotation @EnableEurekaClient at the main class

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.netflix.eureka.EnableEurekaClient;

@SpringBootApplication
@EnableEurekaClient
public class SpringCloudOrderServiceApplication {

    public static void main(String[] args) {
       SpringApplication.run(SpringCloudOrderServiceApplication.class, args);
    }
}

Step #3: Modify application.properties file

Add below properties in your application.properties file.

#port number
server.port=9560
# serviceId
spring.application.name=ORDER-SERVICE
# Eureka location
eureka.client.service-url.defaultZone=http://localhost:8761/eureka
# instance id for eureka server
eureka.instance.instance-id=${spring.application.name}:${random.value}

Moreover, 'eureka.client.service-url.defaultZone=http://localhost:8761/eureka' indicates that this service is getting registered with the Eureka Server.

Step #4: Create a RestController class as OrderRestController.java

In order to publish the Order Service, we will create an OrderRestController and define the endpoints as below. Please note that here we are not using a database as our focus is on the API Gateway functionality. Instead, we are using some hardcoded values just to illustrate the example wherever required.

import org.springframework.beans.factory.annotation.Value;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
@RequestMapping("/order")
public class OrderRestController {

    @Value("${server.port}")
    private String port;

    @GetMapping("/info")
    public ResponseEntity<String> showOrderInfo() {
       return ResponseEntity.ok("FROM ORDER SERVICE, Port# is: " + port);
    }
}

Create Microservice #4(API Gateway Service)

This is one of the most important services for our use case. Let’s develop it step by step.

Step #1: Create a Spring Boot Project

Here, we will use STS(Spring Tool Suite) to create our Spring Boot Project. If you are new to Spring Boot, visit Internal Link to create a sample project in spring boot. While creating a project in STS, add starter ‘Eureka Discovery Client’, ‘Gateway’, ‘Spring Reactive Web’ in order to get all required features.

Step #2: Apply Annotation @EnableEurekaClient at the main class

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.netflix.eureka.EnableEurekaClient;

@SpringBootApplication
@EnableEurekaClient
public class SpringCloudGatewayServiceApplication {

    public static void main(String[] args) {
       SpringApplication.run(SpringCloudGatewayServiceApplication.class, args);
    }
}

Step #3: Modify application.properties file

Add below properties in your application.properties file.

server.port=8080
spring.application.name=GATEWAY-SERVICE
eureka.client.service-url.defaultZone=http://localhost:8761/eureka

Moreover, 'eureka.client.service-url.defaultZone=http://localhost:8761/eureka' indicates that this service is getting registered with the Eureka Server.

Step #4: Create a Configuration class as SpringCloudGatewayRouting.java

This is the most important class for our implementation. It incorporates the configuration code for API Gateway as below.

import org.springframework.cloud.gateway.route.RouteLocator;
import org.springframework.cloud.gateway.route.builder.RouteLocatorBuilder;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class SpringCloudGatewayRouting {

    @Bean
    public RouteLocator configureRoute(RouteLocatorBuilder builder) {
       return builder.routes()
      .route("paymentId", r->r.path("/payment/**").uri("http://localhost:9009")) //static routing
      .route("orderId", r->r.path("/order/**").uri("lb://ORDER-SERVICE")) //dynamic routing
      .build();
    }
}

Please note that there will be one route configuration per microservice, as shown above. It is clear from the code that the Payment Service has only one instance, so a direct URL is provided (static routing). On the other hand, the Order Service has multiple instances, hence load balancing via the lb:// scheme is applied (dynamic routing).
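
As a side note, the same routes can also be declared without Java code, directly in the application.properties of the Gateway service. A minimal sketch equivalent to the configuration class above (using the same route IDs and service names) could look like below; keep only one of the two styles active to avoid duplicate route definitions.

spring.cloud.gateway.routes[0].id=paymentId
spring.cloud.gateway.routes[0].uri=http://localhost:9009
spring.cloud.gateway.routes[0].predicates[0]=Path=/payment/**

spring.cloud.gateway.routes[1].id=orderId
spring.cloud.gateway.routes[1].uri=lb://ORDER-SERVICE
spring.cloud.gateway.routes[1].predicates[0]=Path=/order/**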

How to test API Gateway Enabled Microservice?

It’s time to test our API Gateway enabled Microservice. Please follow below steps:

1) Start Service/Application containing Eureka Server

2) Start Payment Application

3) Start the Order Application, then change the port number in application.properties (for example, from 9560 to 9561) and start the application again to create a second instance of it. Repeat the process to create a third instance if you want, and so on.

4) Start the API Gateway service. Observe that Netty server will be started rather than Tomcat.
5) Open a browser window, hit below URLs and observe the results.

http://localhost:8080/payment/info
http://localhost:8080/order/info

If you are getting the expected results, it means you are calling the microservices through the API Gateway and your implementation is correct. Hit the /order/info URL a few times and observe that the port number in the response changes between the instances, which confirms that load balancing is working.

Conclusion

After going through all the theoretical & example parts of 'How to Implement Spring Cloud Gateway in Microservices', we should finally be able to implement an API Gateway in Microservices. Similarly, we expect you to further extend these examples and implement them in your project accordingly. In addition, if there is any update in the future, we will update the article accordingly. Moreover, feel free to provide your comments in the comments section below.

How to Design a Java Enterprise Application Effectively?

This article is intended for experienced Java EE developers who desire to become architects of enterprise applications in Java. It is also useful for developers, designers, and architects who want to create effective blueprints of applications. Under the title 'How to Design a Java Enterprise Application Effectively?' we will cover a lot of new demands, including faster delivery in an enterprise application, as moving fast is one of the most important criteria of today's IT companies.

Moreover, we will discuss some best practices to design an enterprise application that suits the current demands. We will also cover the design comparison of Monolith with Microservices based application.

Motivation to adapt Design as per Modern Applications

Adapting the design to modern applications is all about quickly adapting to the needs of the current market and customers. If a new feature is desired or looks promising to the customer, how fast can we deliver it to the customer? What are the possibilities to adopt modern tools, technologies, frameworks etc. so that there is automated quality control in place that ensures everything will work as expected? Also, what solutions are we going to provide in the design so that it does not break the existing functionality of the already developed application?

In modern software development, reliability, automation, continuous delivery, robustness, portability and minimal maintenance are some of the most important principles. These are preconditions for delivering a good software design. A reliable, automated process not only leads to speedy delivery, but sooner or later to a higher quality as well. Let's learn how to design a Java Enterprise Application effectively, step by step.

Prerequisite before creating an Application Design

Before creating a design of application, we should know the purpose of the application that we want to develop. The motivations and purposes of the enterprise systems need to be crystal clear before going into technology details. Otherwise, the software is just being developed for the sake of developing functionalities. We should ensure that the software to be built will achieve the business needs. Here, we will discuss the whole process of designing the application step by step. The sequence of steps may differ. Below is the summary of steps that we will cover in detail in this article.

1) Knowing the purpose of the application
2) Designing Application Structure
3) Finalizing Technology & Framework
4) Choosing Front-end Technology
5) Structuring Layers of the Application
6) Designing Business-driven Modules
7) Designing the Packages & Sub-Packages
8) Packaging the classes (Binaries)
9) Finalizing the Build Tools
10) Deciding the Version Control System

Knowing the complete purpose of the Application

Before developing a software application, we should ask ourselves some questions:

1) Why to develop this Application?

2) What is the purpose of the application?

3) What is the problem that this software is going to solve?

4) Would it return some revenue? If yes, directly by selling the application or indirectly by supporting or providing services to customers?

5) Why is it worth spending time and effort to develop a solution?

Every application needs genuine answers to these questions before we spend time and effort into it.

Designing Application Structure

In monolithic Java projects, we can bundle the components & responsibilities into a certain structure called Java packages and project modules respectively. Structuring the application in this way is necessary to deliver a good application architecture. It helps developers understand the application and its responsibilities quickly. By grouping the coherent components that form logical features into logical packages or project modules, we can increase cohesion and accomplish better management of the source code. Java SE 9 introduced the grouping of components as modules, also known as Java 9 modules. These modules are similar to JAR files, with the facility to declare dependencies on other modules along with usage restrictions. The smallest unit of a Java application is a Java class. Ideally, we should keep classes loosely coupled and highly cohesive.

In Microservices based Java Projects, we consider a module as a single microservice or application. We should follow the same package structure as it is in a Monolithic application.

Single vs Multi-module Projects- Monolithic Architecture

For small applications a single module is justifiable, whereas for a big application such as an enterprise application, it is advisable to go with a multi-module design. The more complex our software project becomes, the longer it will take to compile and package it into artifacts, so it directly impacts build-time performance. Using the multi-module practice, we have the possibility of reusing certain sub-modules in other projects, e.g. a general practice is to design a common utility module that contains several helper classes. We can package this module as a JAR file and re-use it as a dependency in other projects. One important point to note here is that shared modules should have as few dependencies as possible, or at least contain only stable dependencies.
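
As an illustration (the group ID and module names are hypothetical), a multi-module Maven project is driven by a parent pom.xml with 'pom' packaging that simply lists its sub-modules; the common utility module mentioned above would be one of them:

<project xmlns="http://maven.apache.org/POM/4.0.0">
    <modelVersion>4.0.0</modelVersion>
    <groupId>com.example.shop</groupId>
    <artifactId>shop-parent</artifactId>
    <version>1.0.0</version>
    <!-- parent of a multi-module build, so packaging must be 'pom' -->
    <packaging>pom</packaging>

    <modules>
        <!-- shared helper classes, reusable in other projects as a plain JAR -->
        <module>common-util</module>
        <!-- business modules -->
        <module>users</module>
        <module>orders</module>
    </modules>
</project>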

Benefits of Using Microservices Architecture

Rather than using Monolith architecture, if we use Microservices Architecture, we don’t need to do much exercise on deciding modules & their dependencies. Each module will compile in less time and even provide the best build-time performance. Additionally, it provides the easy reusability of other modules.

Finalizing Technology & Framework

Once the application's purpose and functionality are clear to all the stakeholders, we can keep our focus on the technological aspects. We should favor technologies that can implement the business use cases fairly and also minimize the amount of work and overhead. Developers should be able to focus on the business use cases rather than on the framework and technology. A designer should choose frameworks that can support solving business problems intelligently. We should choose technology that can also support rich development workflows as much as possible. This not only incorporates automation and speedy development turnarounds, but also the ability to adopt modern infrastructure, such as Linux boxes. The decision on technology & framework can be the turning point in how to design a Java Enterprise Application effectively.

Now-a-days, Spring Boot is the most popular among Java developers & projects. It is suitable in Monolithic as well as Microservices based projects.

Choosing Front-end Technology

Although it doesn't matter which specific front-end technology is chosen, enterprise projects now have more advanced front-ends than in the past. However, it is always recommended to separate the front-end into a dedicated project module. The reason is that the front-end may have a different life cycle than the rest of the modules, and we can ship it individually.

If we have planned to deploy the front-end with the back-end as a single unit, we require more coordination between teams. On the other hand, this integration isn't necessarily helpful during development if the development cycle of the front-end differs from that of the back-end. A developer currently working on the front-end may not want to build the back-end part each and every time.

Moreover, it is also recommended to package the front-end as an independent module and to introduce it as a dependency of the back-end module. By doing so, we can build the front-end module separately, whereas a back-end developer can rebuild only the back-end part while using the latest available version of the front-end. Therefore, we can reduce the time taken in the build process on both sides. Angular & React are becoming more popular & in demand in the market day by day, so we can consider them the best options for a Java application. Also, they are well-qualified and suitable front-end technologies for a Java developer.

In Microservices based applications, the front-end is always kept independent from the back-end. The most common way for the front-end to call back-end services is using REST calls.

Structuring Layers of the Application

The structure of a standard Java Enterprise Application is usually a three-tier architecture. It technically translates into three layers: the Presentation, Business and Data layers. The primary idea of layering the architecture is to separate the concerns of the data layer from the business layer, and both of them from the presentation layer as well. Each technically inspired layer or module has its own internal dependencies that should not be used from the other layers. For example, only the data layer should be able to use the database; the business layer should not invoke the database directly. Also, if the database technology changes in the future (say, from SQL Server to PostgreSQL), there should not be any negative impact on the other layers. Similarly, if the presentation technology changes, there should not be any negative impact on the other layers.

Designing Business-driven Modules

The first thing that we must focus on is business needs & concerns. We should reflect these aspects in the project and module structure as well. Our domain should clearly be reflected in the application structure. Just looking at the hierarchy of package names should give a clear idea of what the application is trying to do. Therefore, incorporating the business needs & concerns first and the other implementation details second is the best approach to start with the module design. For example, an e-commerce application should incorporate modules for users, articles, payments, orders, shipping etc. Therefore, we should consider them as the base domain modules. We should also represent them as the base Java packages in our application.

Needless to mention, in a Microservices based application, each module will work as a separate application.

Designing the Packages & Sub-Packages

Suppose we are going to define packages for our Users module. In the users package we can have sub-packages such as controller, core, model, data and so on. By following this perspective, we can break up responsibilities inside the users package by their technical groupings. Also, all the other modules and packages in the project should have similar sub-packages, depending on their contents. We should assign one of the sub-packages to be the technical entry point, let’s say controller. This package should contain the communication endpoints launching the use case logic and represent as entry point outside of the application. These are applicable for Microservices based application as well.
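
For instance (the package names are purely illustrative), the users module of an e-commerce application could be laid out like this, with 'controller' acting as the technical entry point:

com.example.shop.users             -> base package of the users module
com.example.shop.users.controller  -> REST endpoints (technical entry point)
com.example.shop.users.core        -> business/use case logic
com.example.shop.users.model       -> domain objects such as User
com.example.shop.users.data        -> repositories / database access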

Packaging the classes (Binaries)

The binaries are eventually the end results of the development and build process. Only these binaries can be executed as an application. So, a Java application will have to be deployed as some kind of binary artifact. Conventionally, we package the compiled Java code in the form of JAR, WAR, or EAR files. These stand for Java Archive, Web Application Archive & Enterprise Application Archive respectively. Moreover, these files contain all classes and files required to ship an application, framework dependency, or library. The Java Virtual Machine (JVM) in the end will execute the bytecode (.class files) from the archives. Depending on the complexity of the application, we should decide which archive file needs to be created for packaging the application.

No doubt, a Microservices based application saves time in packaging the classes as the individual services are small in size. But here we will have more than one JAR/WAR.

Finalizing Build Tools

We make changes in our Java source files multiple times during development. Therefore, it is necessary to have software which can make our code compilation process as fast as possible. The primary responsibility of the build tool is to compile Java source files into bytecode (.class files). Apart from that, build tools help us to resolve all the sub-module dependencies. They also help us in packaging all compiled classes and their dependencies into deployment artifacts. These artifacts are packaged as JAR, WAR or EAR depending on the nature of the application. Apache Maven & Gradle are the most powerful tools in this category in modern days, and Maven is the most popular among the majority of Java developers. Apache Ant is out of date now.
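
For example, with Maven a single command such as 'mvn clean package' compiles the sources, runs the tests, and produces the final JAR/WAR artifact in the project's target directory.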

Deciding the Version Control System

We have a lot of options for version control systems, such as Git, Subversion (SVN), Mercurial or CVS. Now-a-days CVS & SVN are rarely used. Distributed revision control systems, especially Git, have been widely adopted as the modern tools over the last many years. All software projects need coordination of concurrent code changes made by different developers. Hence version control systems become mandatory to coordinate, track, and follow the changes in software systems.

Summary

This is all about how to design a Java Enterprise Application effectively. We should design a Java Enterprise application with the priority of resolving business problems rather than providing technology-driven solutions. The business use cases are what will generate profits for the company in the long run. If the number of classes per module is small, we can design a module as a single Java package. For more complex modules, it is advisable to add another hierarchical layer using patterns.

If you are interested to learn Design Patterns, kindly visit our article on 'Java Design Patterns'.

MCQ on Spring and Hibernate

Details on this page are a little bit different from other pages. Rather than going through some theoretical or practical way of learning the Spring & Hibernate frameworks, we will test our understanding of Spring & Hibernate by solving multiple choice questions (MCQs). On this page, we will talk about 'MCQ on Spring and Hibernate'.

The answer & explanation for each question are given at the bottom of this page.

You will find these questions for the first time on the web, as they were created with considerable effort. You may also find these questions on some other websites/blogs after this page is published. Anyone who likes the questions & is interested in copying them can simply provide a link to this page as a reference rather than copying the whole questions. Examiners and test conductors are welcome to utilize these questions as long as they do not put them on any website/blog.


MCQ on Spring and Hibernate

Q1. Robert is working on a web application where Spring is being used. He wants to apply prototype scope in the bean named as ‘Person’. Which one of the following is the correct option where the ‘Person’ bean works within the prototype scope :

(A) <bean id="person" class="com.test.Person"  scope="Prototype"/>

(B) <bean id="person" class="com.test.Person"  scope="prototype"/>

(C) <bean id="person" class="com.test.Person"  PrototypeScope="true"/>

(D) <bean id="person" class="com.test.Person"  Scope="prototype"/>

Q2. Spring framework offers us multiple ways to define the scope of a bean. Which one of the following is not the correct way of defining a ‘request’ scope?

(A) @RequestScope
    public class Product {
          // some properties & methods
    }

(B) <bean id="product" class="com.test.Product" Scope="request"/>

(C) @Scope("request")
    public class Product {
          // some properties & methods
    }

(D) <bean id="product" class="com.test.Product" scope="request"/>

Q3. Select the option that is not a correct approach to create bean life cycle methods in Spring framework?

(A) By using Annotations 

(B) By implementing Interfaces provided by Spring framework

(C) By extending classes provided by Spring framework

(D) By using XML configuration

Q4. Which interface out of the following options will you use to perform destruction of beans in the context of the life cycle methods?

(A) InitializingBean

(B) PostConstruct

(C) DisposableBean

(D) PreDestroy

Q5. Who is capable of maintaining a registry of different beans and their dependencies in a Spring based web application?

(A) BeanFactory class

(B) BeanFactory interface

(C) beanFactory method

(D) Client Application

Q6. Select the incorrect statement about BeanFactory in Spring Framework?

(A) BeanFactory holds bean definitions

(B) BeanFactory instantiates beans whenever asked by the client application

(C) BeanFactory is capable of creating associations between dependent objects while instantiating them

(D) BeanFactory supports the Annotation-based dependency Injection

Q7. The four annotations given below are used in a Spring Boot based application. Which one is the Spring Boot annotation that is an alternative to Spring’s standard @Configuration annotation?

(A) @EnableAutoConfiguration

(B) @SpringBootConfiguration

(C) @ConfigurationProperties

(D) @ConfigurationPropertiesScan

Q8. What is the purpose of @SpringBootConfiguration annotation in a Spring Boot web application?

(A) enables registration of extra beans in the context

(B) scans on the package where the application is located

(C) enables Spring Boot’s auto-configuration mechanism

(D) disables additional configuration classes from the application

Q9. Johnson is a developer. He is working in a Hibernate based application. He defines a class, called ‘InvoiceId’. The fields ‘category’ and ‘name’ will represent a unique InvoiceId as below:

@Embeddable
public class InvoiceId implements Serializable {

    private String category;

    private String name;

    // getters and setters // equals() and hashCode()
}

Which of the following options represents that the ‘Invoice’ entity has a composite key?

(A) @Entity 
    public class Invoice { 
       @Id 
       @Embedded 
       private InvoiceId id; 
       private String description; 
       private Double amount; 
       //standard getters and setters 
    }

(B) @Entity 
    public class Invoice { 
       @EmbeddedId 
       private InvoiceId id; 
       private String description; 
       private Double amount; 
       //standard getters and setters 
    }

(C) @Entity 
    public class Invoice { 
       @Id 
       private InvoiceId id; 
       private String description; 
       private Double amount; 
       //standard getters and setters 
    }

(D) @Entity 
    public class Invoice { 
       @EmbeddableId 
       private InvoiceId id; 
       private String description; 
       private Double amount; 
       //standard getters and setters 
    }

Q10. Suppose you are working on a Hibernate-based application. You have two classes as below. You want to create a table in the database using the Hibernate ORM concept.

public class Employee { 
   private int id; 
   private Name name; 
   private double salary; 
   // getters & setters 
} 

public class Name { 
   private String firstName; 
   private String lastName; 
   //getters & setters 
} 

You want to create a table named ’employee’ with 4 columns: id, firstName, lastName, salary. Which option is correct to create the aforesaid table? Assume that table & column names are not case-sensitive.

(A) public class Employee { 
       @Id 
       private int id; 
       private Name name; 
       private double salary; 
       // getters & setters 
    } 

    @Embeddable 
    public class Name {
       private String firstName; 
       private String lastName; 
       // getters & setters 
    }

(B) @Entity 
    public class Employee { 
       @Id 
       private int id; 
       @Embedded 
       private Name name; 
       private double salary; 
       // getters & setters 
    } 

    @Embedded 
    public class Name { 
       private String firstName; 
       private String lastName; 
       // getters & setters 
    }

(C) @Entity 
    public class Employee { 
       @Id 
       private int id;
       @Embedded 
       private Name name; 
       private double salary; 
       // getters & setters 
    } 

    @Entity 
    public class Name {
       private String firstName;
       private String lastName; 
       // getters & setters
    }

(D) @Entity 
    public class Employee {
       @Id 
       private int id; 
       @Embeddable 
       private Name name; 
       private double salary; 
       // getters & setters 
    } 

    @Entity 
    public class Name {
       private String firstName; 
       private String lastName; 
       // getters & setters 
    }

Q11. Mary is working on a Hibernate based application as a web-application developer. She has two entities in her application: Student & Address. She is implementing a relation where one student can have multiple addresses and one address can accommodate multiple students in a bidirectional relationship. Mary doesn’t want Hibernate to create an additional table. Which of the following annotations and its attribute will she use to fulfill the given requirement?

(A) @ManyToMany and mappedBy

(B) @ManyToOne and mappedBy

(C) @OneToMany and MappedBy

(D) @OneToOne and MappedBy

Q12. In the context of Access types in Hibernate, if @Id is located at the getter method of an entity, what does it mean?

(A) Entity access behavior is Field Access

(B) Entity access behavior is Property Access

(C) It represents both field & property access behavior

(D) From the given information, we can't determine the Access behavior

Q13. Suppose you are working on a web application that uses Hibernate as an ORM tool. At which level(s) is @Access allowed to be used in an Entity?

(A) Field Level

(B) Method Level, Field Level

(C) Method Level

(D) Field Level, Method Level, Class level

Q14. In JPA, which one of the following annotations converts date and time values from a Java object to a compatible database type and retrieves them back to the application?

(A) @Timestamp

(B) @Time

(C) @Temporal

(D) @Date

Q15. Jacob is working on a web application where JPA is being used in data layer. He wrote a POJO class and wants this class to work as an entity to implement the ORM concept. What are the minimum required annotations he must use at the class to create a table in the database?

(A) @Id, @Entity

(B) @Table, @Entity, @Id

(C) @Column, @Entity, @Table 

(D) @Id, @GeneratedValue

Q16. Consider the following bean definition :

 <bean id="test" class="com.dev.test.TestBean" /> 

Which of the below bean scope is applied in this bean?

(A) Session

(B) Request

(C) Prototype

(D) Singleton

Q17. Robin is working in a Spring based web application. He declares the scope of a bean by using annotations. Select the option which is incorrect in declaring the scope.

(A) @Component 
    @Scope("request") 
    public class Product { 
     //some methods and properties 
    }

(B) @Component 
    @Scope("session") 
    public class Product { 
     //some methods and properties 
    }

(C) @Component 
    @Sessionscope 
    public class Product {
     //some methods and properties 
    }

(D) @Component 
    @SessionScope 
    public class Product { 
     //some methods and properties
    }

Q18. Suppose you are working on a Spring based Web Application. Generally, there are three ways to implement a core feature of Spring framework in your application. Below are the three ways:

1) Using Annotation    2) Using XML configuration    3) Implementing Interfaces provided by Spring framework

Which option is correct if you want to define Bean Lifecycle methods in your project:

(A) Only Option (1) 

(B) Option (1) & Option (2) 

(C) Option (2) & Option (3) 

(D) All three 

Q19. Which option is incorrect about @Component annotation?

(A) It scans our application for classes annotated with @Component

(B) It instantiates classes whenever required

(C) It supports Spring's auto-detection mechanism

(D) It doesn't inject any specified dependencies

Q20. Melissa works on an application where Spring is being used. She wrote a custom function in a class that handles bean life cycle disposal. By mistake, she also implemented the DisposableBean interface and overrode the default method provided by the Spring container in the same class. What will happen if she runs the application?

(A) Only default method will be called

(B) Only custom method will be called

(C) First default method and then custom method will be called

(D) First custom method and then default method will be called

Q21. There are two classes ‘Product’ and ‘ProductCategory’ in a Spring based Project. You want to inject ProductCategory into Product class by means of constructor injection. Which of the below option is correct to implement it?

(A) @Component 
    public class Product { 
        private int id; 
        @Autowired 
        private ProductCategory category; 
        public Product(ProductCategory category) { 
           this.category = category;
        } 
        public ProductCategory getCategory() { 
           return category; 
        } 
        public void setCategory(ProductCategory category) {
           this.category = category; 
        } 
    }

(B) @Component 
    public class Product { 
       private int id; 
       private ProductCategory category; 
       public Product(ProductCategory category) { 
          this.category = category; 
       } 
       public ProductCategory getCategory() {
          return category;
       } 
       @Autowired 
       public void setCategory(ProductCategory category) {
          this.category = category; 
       } 
   }

(C) @Component 
    public class Product { 
       private int id; 
       private ProductCategory category;
       @Autowired 
       public Product(ProductCategory category) {
          this.category = category; 
       }
       public ProductCategory getCategory() {
          return category; 
       } 
       public void setCategory(ProductCategory category) { 
          this.category = category;
       } 
   }

(D) @Component 
    public class Product { 
       private int id; 
       private ProductCategory category;
       public Product(ProductCategory category) {
          this.category = category; 
       } 
       @Autowired 
       public ProductCategory getCategory() {
          return category; 
       } 
       public void setCategory(ProductCategory category) { 
          this.category = category;
       }
   }

Q22. Suppose you are working on a Student library system which is developed in the Spring framework. There are four classes as shown below.

   @Component 
   public class Student {
      private int id; 
      private Address address; 
   } 
   @Component 
   public interface Address {
     // some fields & Methods 
   }
   @Component 
   public class PermanentAddress implements Address { 
     // some fields & Methods 
   } 
   @Component 
   public class MailingAddress implements Address {
     // some fields & Methods 
   }

You want to inject dependency of PermanentAddress class into Student class. Which option represents the correct use of dependency injection?

(A) @Component 
    public class Student { 
        private int id; 
        @Autowired 
        private Address address; 
    }

(B) @Component 
    public class Student { 
       private int id;
       @Autowired 
       @Qualifier("PermanentAddress")   
       private Address address; 
    }

(C) @Component 
    public class Student { 
       private int id; 
       @Autowired 
       @Qualifier 
       private Address address; 
    }

(D) @Component 
    public class Student {
       private int id;
       @Autowired 
       @Qualifier("permanentAddress") 
       private Address address; 
    }

Q23. Which option is true for the role of BeanFactory in Spring Framework:

(A) BeanFactory provides the configuration framework and basic functionality

(B) BeanFactory provides access to messages in i18n-style

(C) BeanFactory provides access to resources, such as URLs and files

(D) BeanFactory provides event propagation to beans implementing the ApplicationListener interface

Q24. Select the option which is true about BeanFactory & ApplicationContext in the Spring Framework:

(A) Any description of ApplicationContext capabilities and behavior should be considered to apply to BeanFactory as well.

(B) An ApplicationContext is a complete superset of a BeanFactory

(C) BeanFactory builds on top of the ApplicationContext. 

(D) When building most applications in a J2EE-environment, the best option is to use the BeanFactory, since it offers all the features of the ApplicationContext.

Q25. The @SpringBootApplication annotation is a combination of three annotations. Which one of the given options is not part of this combination of three annotations?

(A) @SpringBootConfiguration

(B) @EnableAutoConfiguration

(C) @Configuration

(D) @ComponentScan

Q26. Consider the below code where the Product entity has a Category as its @EmbeddedId along with other fields related to a product. The Category tells Hibernate that the Product entity has a composite key.

@Entity 
public class Product implements Serializable { 
    @EmbeddedId 
    private Category id; 
    private String name; 
    //getters & setters 
} 

What will be the structure of the Category class in the context of the above composite key?

(A) @Embedded
    public class Category implements Serializable {
       private String name; 
       private String description; 
       //getters & setters 
    }

(B) @Component 
    public class Category implements Serializable { 
       private String name; 
       private String description; 
       //getters & setters 
    }

(C) @Entity 
    public class Category implements Serializable { 
       private String name; 
       private String description; 
       //getters & setters 
    }

(D) @Embeddable  
    public class Category implements Serializable {  
       private String name; 
       private String description; 
       //getters & setters 
    }

Q27. In a Hibernate based application, in order to interact with the database, we make use of various methods provided by EntityManager API. Which statement is false in the context of these methods?

(A) We can make use of the persist() method in order to have an object associated with the EntityManager.

(B) We can use the findReference() method, if we just need the reference to the entity.

(C) We can use the detach() method to detach an entity from the persistence context.

(D) We can use the find() method to retrieve an object from the database.

Q28. In order to access entity attributes, we implement access strategies in Hibernate. Which type(s) of access strategy is allowed in Hibernate?

(A) field-based access

(B) property-based access

(C) class-based access

(D) field-based access & property-based access

Q29. Sophia is using Spring Data JPA for the data layer implementation in her web application. She wrote a custom method in the repository. She is using an ‘update’ query to implement the functionality of the method. Which one of the following annotations should she use on that method?

(A) @Updating

(B) @Updated

(C) @Modifying

(D) @Modified

Q30. Which option is incorrect about Spring Framework?

(A) It is a lightweight framework.

(B) Spring applications are loosely coupled because of dependency injection.

(C) It offers only Java-based annotations for configuration options.

(D) It provides declarative support for caching, validation, transaction, and formatting.

 

You may comment your score in the comment section.

Answers & Explanations

A1. B is correct. Options ‘A’ and ‘D’ have syntactical errors. Option ‘C’ has PrototypeScope which is not a valid attribute. Hence B is correct.

A2. B is correct. Option B has incorrect attribute ‘Scope’. It should be ‘scope’. All Other options are correct.

A3. C is correct. Spring doesn’t provide any class to extend in order to create life cycle methods, but it provides interfaces.

A4. C is correct. Spring allows your bean to perform destroy callback method by implementing the DisposableBean interface. Option B & D are the annotations, not the interface.

A5. B is correct. A BeanFactory is the interface which is capable of maintaining a registry of different beans and their dependencies.

A6. D is correct. BeanFactory does not support the Annotation-based dependency injection, whereas ApplicationContext, a superset of BeanFactory does.

A7. B is correct. @SpringBootConfiguration is an alternative to Spring’s standard @Configuration annotation.

A8. A is correct. @SpringBootConfiguration enables registration of extra beans in the context or the import of additional configuration classes.

A9. B is correct. @EmbeddedId is the correct annotation to represent that an entity has a composite key.

A10. C is correct. In Option A, @Entity is missing on the Employee class. In Option B, @Embedded is disallowed at class level, and in Option D, @Embeddable is disallowed at field level.

A11. C is correct. As per the problem statement, only @OneToMany or @ManyToOne can be used. Hence, Options A & D are incorrect. The mappedBy attribute is disallowed in @ManyToOne, which makes Option B also incorrect. Hence, option C is correct.
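
For reference, below is a minimal sketch of such a bidirectional mapping (class and field names are illustrative, not taken from the question); the mappedBy side prevents Hibernate from generating an extra join table.

import java.util.List;
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.JoinColumn;
import javax.persistence.ManyToOne;
import javax.persistence.OneToMany;

@Entity
class Student {
    @Id
    private int id;

    // 'mappedBy' marks this as the inverse side, so Hibernate creates no extra join table
    @OneToMany(mappedBy = "student")
    private List<Address> addresses;
    // getters & setters
}

@Entity
class Address {
    @Id
    private int id;

    // owning side: holds the foreign key column in the address table
    @ManyToOne
    @JoinColumn(name = "student_id")
    private Student student;
    // getters & setters
}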

A12. B is correct. If we apply @Id on the getter method of an entity/class, it means property access behavior is enabled. If we apply @Id on field, it means field access behavior is enabled.

A13. D is correct. @Access can be used at all three levels : Field, Method & Class.
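
For illustration, below is a minimal sketch (entity and field names are assumptions) showing property access with @Id on the getter, along with the @Access annotation that can make the choice explicit.

import javax.persistence.Access;
import javax.persistence.AccessType;
import javax.persistence.Entity;
import javax.persistence.Id;

// Property access: mapping annotations such as @Id sit on the getter methods.
// Placing @Id on the field instead would switch the entity to field access.
@Entity
@Access(AccessType.PROPERTY) // @Access may also be placed on individual fields or getters
public class Book {

    private int id;
    private String title;

    @Id
    public int getId() { return id; }
    public void setId(int id) { this.id = id; }

    public String getTitle() { return title; }
    public void setTitle(String title) { this.title = title; }
}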

A14. C is correct. @Temporal annotation converts the date and time values from Java object to compatible database type and retrieves back to the application.
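
For example, a minimal sketch (entity and field names are assumptions) of @Temporal usage could look like this.

import java.util.Date;
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Temporal;
import javax.persistence.TemporalType;

@Entity
public class Payment {

    @Id
    private int id;

    // TemporalType can be DATE, TIME or TIMESTAMP
    @Temporal(TemporalType.TIMESTAMP)
    private Date createdOn;

    // getters & setters
}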

A15. A is correct. In order to create a table in the database, @Entity & @Id annotations are mandatory, otherwise JPA will throw an exception.

A16. D is correct. A bean has singleton scope by default, whether we declare it in the bean definition or not.

A17. C is correct. Option C has incorrect annotation. It should be ‘@SessionScope’ in place of ‘@Sessionscope’. All other options are correct.

A18. D is correct. There are three ways to define bean life cycle methods: Annotation or XML configuration or Spring Interfaces.

A19. D is correct. @Component annotation also injects any specified dependencies.

A20. C is correct. Default methods provided by Spring always take precedence in a bean life cycle.
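
The behavior can be sketched as below (bean and method names are illustrative): Spring invokes the DisposableBean destroy() callback before any custom destroy method registered for the same bean.

import org.springframework.beans.factory.DisposableBean;

public class ConnectionManager implements DisposableBean {

    // default callback from the DisposableBean interface: Spring calls this first
    @Override
    public void destroy() {
        System.out.println("DisposableBean.destroy() called");
    }

    // custom destroy method (e.g. registered via destroy-method in XML or
    // destroyMethod on a @Bean definition): called after destroy()
    public void close() {
        System.out.println("custom close() called");
    }
}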

A21. C is correct. @Autowired annotation will be applied on constructor as the question is talking about constructor injection.

A22. D is correct. Spring container by default considers bean name same as the class name, but with the first letter in lower case. In order to get more idea on @Qualifier, visit the article on @Qualifier Annotation.
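
For illustration, below is a minimal sketch (bean names are assumptions) showing both the defaulted bean name and an explicitly overridden one being matched by @Qualifier.

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.stereotype.Component;

interface Address { }

@Component                      // default bean name: "permanentAddress" (class name with first letter in lower case)
class PermanentAddress implements Address { }

@Component("postalAddress")     // an explicit name overrides the default
class MailingAddress implements Address { }

@Component
class Student {
    @Autowired
    @Qualifier("permanentAddress") // matches the defaulted bean name of PermanentAddress
    private Address address;
}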

A23. A is correct. Options B, C, D are the roles of ApplicationContext, not the BeanFactory.

A24. B is correct. According to the Spring Framework documentation, option B is correct. For more details, kindly refer to the official Spring documentation.

A25. A is correct. @SpringBootConfiguration is not part of this combination, but @Configuration is. For more details, visit @SpringBootApplication annotation.

A26. D is correct. We represent a composite primary key in Hibernate by using the @Embeddable annotation on a class.

A27. B is correct. If we just need the reference to the entity, we can use the getReference() method, not the findReference().

A28. D is correct. There are two types of access strategies allowed in Hibernate : field-based access and property-based access.

A29. C is correct. Only @Modifying annotation is available. All others are incorrect.

A30. C is correct. Apart from Java-based annotations, Spring also offers XML-based configuration options. Hence option ‘C’ is the right answer.

 

Spring Boot Cassandra CRUD Examples

In continuation of the Cassandra DB installation, in this article we will learn the most important DB operations. Anyone in the software development world should at least know how to write CRUD operations on the database front. Needless to say, these operations are of great importance in application development; if you are developing a web application in any programming language, you can’t escape them. CRUD is nothing but an abbreviation for Create, Read, Update and Delete, and developing CRUD operations is expected from all developers. We will learn ‘Spring Boot Cassandra CRUD Examples’ in this article.

Software/Technologies Used in the Examples

Sometimes one version conflicts with another. Hence, below is the combination of software versions that is proven to work together and that was used to develop these examples.

1) Apache Cassandra 3.11.12 (download Cassandra)
2) Python 2.7 (download Python)
3) JDK 1.8 or later
4) Lombok 1.18.22
5) Spring Data Cassandra 3.3.1
6) Spring Boot 2.6.3
7) Spring Framework 5.3.15
8) Maven 3.8.1
9) IDE – STS 4.7.1.RELEASE (Starters: Spring Data for Apache Cassandra, Lombok)

How to make Cassandra DB ready to connect with Spring Boot?

Step#1: Download & Install Apache Cassandra

If you don’t have Cassandra installed on your local system, kindly follow a separate article on ‘How to Install Cassandra in the local System?‘.

Step#2: Start the Cassandra DB

Open a command prompt (‘cmd’) window and navigate to the bin directory of your Cassandra installation (e.g. ‘C:\apache-cassandra-3.11.12\bin’). Now type cassandra to run it, then observe the message ‘Starting Cassandra Server’ at the start or ‘startup complete’ in the bottom lines.

C:\apache-cassandra-3.11.12\bin>cassandra

How to create Keyspace & table in Cassandra DB?

In order to work with Apache Cassandra, we have to create a keyspace and a table by following below steps.

Step#1: Create a keyspace

Open a new command prompt(cmd) and type cqlsh as below:

C:\Users\username>cqlsh

Create a keyspace by providing below command. The name of keyspace is ‘invoicedata’ in my case.

CREATE KEYSPACE invoicedata WITH REPLICATION = {'class' : 'SimpleStrategy','replication_factor' : 1};

Your keyspace is created. Now provide below command to use created keyspace.

cqlsh> use invoicedata;

You will see the below prompt, which means you are in the ‘invoicedata’ keyspace.

cqlsh:invoicedata>

If you want to drop a keyspace due to any reason, provide below command:

cqlsh:invoicedata> drop keyspace mykeyspace;

Step#2: Create a table

In order to create a table named ‘invoice’, provide below command:

cqlsh:invoicedata> create table invoice(id int primary key, name text, number text, amount double);

If you want to drop a table due to any reason, provide below command:

cqlsh:invoicedata> drop table invoicedata.mytable;

In order to verify the created table (on an empty table it simply shows the column headers), provide the below command:

cqlsh:invoicedata> select * from invoice;

A keyspace is the outermost container for data in Cassandra. It has a name and attributes that define its behavior. While the most common scenario is to have one keyspace per application, you could choose to segment your data into multiple keyspaces as well. Tables are located in keyspaces. A keyspace defines options that apply to all the keyspace’s tables.

By default, keyspace and table names are case-insensitive (myTable is equivalent to mytable) but case sensitivity can be forced by using double-quotes (“myTable” is different from mytable). Further, a table is always part of a keyspace and a table name can be provided fully-qualified by the keyspace it is part of. If it is not fully-qualified, the table is assumed to be in the current keyspace in the context.

What is CassandraRepository<T, ID> ?

CassandraRepository<T, ID> is an important interface which helps us write CRUD operations easily by providing some predefined methods. In our case, we will extend our custom repository interface from this interface. It is just like the JpaRepository<T, ID> that we use to write CRUD operations for SQL databases. Moreover, it extends the CrudRepository<T, ID> interface, which in turn extends the Repository<T, ID> interface. While using Cassandra DB with Spring Boot, we often use this interface to develop CRUD operations easily. Below is the dependency that we add in pom.xml in order to get the predefined methods provided by this interface.

<dependency>
       <groupId>org.springframework.boot</groupId>
       <artifactId>spring-boot-starter-data-cassandra</artifactId>
</dependency>

Spring Boot Cassandra CRUD Examples

In this section, we will discuss the Spring Boot Cassandra CRUD examples in detail. We will start with creating a Spring Boot Starter Project. In the following steps we will set up the configuration in order to connect Cassandra DB with our Spring Boot application. Next, we will use CassandraRepository to create CRUD operations. Finally, we will test the outputs of the created operations from the console and Cassandra DB as well. Let’s start implementing our example step by step.

Use case Details

We will consider an Invoice as a model to create a table in Cassandra DB and implement CRUD Operations accordingly.

Minimum Software Required

♠ Apache Cassandra 3.11.12
♦ JDK 8 and Above
♠ Python 2.7

Prerequisite

1) In this article, we are using a local Cassandra installation. So, in order to implement the Spring Boot Cassandra CRUD Examples, we need to have Cassandra DB installed on our local system. For a complete installation of Cassandra DB and its dependencies, we have a separate article on ‘How to Install Cassandra in the local System?‘. Kindly refer to this article to get Apache Cassandra installed on your local system easily.

2) Before running your Spring Boot Application, start the Cassandra DB and create a keyspace & a table as described in the above sections.

Step#1 : Create a Spring Boot Project using STS(Spring Tool Suite)

Here, we will use STS (Spring Tool Suite) to create our Spring Boot project. If you are new to Spring Boot, visit a link on How to create a sample project in spring boot using STS?. While creating a project in STS, add the starters ‘Spring Data for Apache Cassandra’ and ‘Lombok’ in order to get the features of Cassandra DB. Furthermore, if you are new to ‘Lombok’, kindly visit ‘How to configure Lombok‘ to know all about it in detail. Please note that the Lombok dependency is optional; if you prefer to create getters/setters & constructors with the help of the IDE, you can skip adding this dependency.

Step#2 : Update application.properties

Update application.properties as below.

spring.data.cassandra.keyspace-name=invoicedata
spring.data.cassandra.contact-points=localhost
spring.data.cassandra.port=9042
spring.data.cassandra.schema-action=NONE

Here, schema-action=NONE indicates that we do not want our database to be created or recreated on startup. The rest of the attributes are used by Spring Data to connect to the correct Cassandra cluster.

Step#3 : Create a Configuration class for Cassandra DB

We need to create a @Configuration class for setting up Spring beans for Cluster and Session instances. Spring Data Cassandra provides an AbstractCassandraConfiguration base class to reduce the configuration code needed.

package com.dev.springboot.config;

import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.cassandra.config.AbstractCassandraConfiguration;

@Configuration
public class CassandraConfiguration extends AbstractCassandraConfiguration {

       @Value("${spring.data.cassandra.keyspace-name}")
       private String keySpace;

       @Value("${spring.data.cassandra.contact-points}")
       private String contactPoints;

       @Value("${spring.data.cassandra.port}")
       private int port;

       /*
        * Provide a keyspace name to the configuration.
        */
       @Override
       public String getKeyspaceName() {
           return keySpace;
       }

       @Override
       public String getContactPoints() {
           return contactPoints;
       }

       @Override
       public int getPort() {
           return port;
       }
}

Step#4 : Create a model class to map it as a table in Cassandra DB

Here we will use Invoice as a model class to illustrate the examples. Hence create an Invoice.java class as below.

package com.dev.springboot.entity;

import org.springframework.data.cassandra.core.mapping.PrimaryKey;
import org.springframework.data.cassandra.core.mapping.Table;
import lombok.AllArgsConstructor;
import lombok.Data;
import lombok.NoArgsConstructor;

@Table //represents that it will map to ‘invoice’ table in Cassandra DB.
@Data
@NoArgsConstructor
@AllArgsConstructor
public class Invoice {

     @PrimaryKey
     private Integer id;
     private String name;
     private String number;
     private Double amount;
}

Note: There are two noticeable points here. Unlike @Entity on the model class in JPA, we have used @Table in Cassandra. Moreover, unlike @Id in JPA, we have used @PrimaryKey in Cassandra.

Step#5 : Create a Repository interface

In order to support DB operations, we will create one Repository interface. As per convention, we name it InvoiceRepository.java which will extend CassandraRepository<Invoice, Integer> as below.

package com.dev.springboot.repository;
import java.util.List;
import org.springframework.data.cassandra.repository.AllowFiltering;
import org.springframework.data.cassandra.repository.CassandraRepository;
import com.dev.springboot.entity.Invoice;

public interface InvoiceRepository extends CassandraRepository<Invoice, Integer> {

    // Like other database repositories, some commonly used methods are already provided by CassandraRepository.
    // Hence, we don't need to write those here. We can write custom methods.
    // For example, the below method is a custom method.
    @AllowFiltering
    List<Invoice> findByName(String name);
}

@AllowFiltering: When your query needs filtering, Cassandra suggests that you add @AllowFiltering to your method. Moreover, if you don’t want to add @AllowFiltering, you have other options such as changing your data model, adding an index, using another table etc. You have to make the right choice for your specific use case.

Note for developing CRUD Operations: We will create a Runner class to write and test each operation in the further steps. Moreover, we will prefix the Runner class name with the operation name to make it easier to understand.

save(), saveAll(), insert(): Inserting records Example using Spring Boot & Cassandra

There are four methods provided by CassandraRepository to insert records into the DB: save(), saveAll(), and two flavors of insert() to insert a single record & a collection of records respectively. For example, below is the code to understand data insertion.

package com.dev.springboot.runner;

import java.util.List;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.CommandLineRunner;
import org.springframework.stereotype.Component;
import com.dev.springboot.entity.Invoice;
import com.dev.springboot.repository.InvoiceRepository;

@Component
public class SaveOrInsertOperationRunner implements CommandLineRunner {

      @Autowired
      InvoiceRepository repo;

      @Override
      public void run(String... args) throws Exception {

          //saving one record into Cassandra DB using save() method
          Invoice inv = new Invoice(1,"Inv1","POS34523",295.74);
          repo.save(inv);

          //saving multiple records into Cassandra DB using saveAll() method
          repo.saveAll(List.of(
                   new Invoice(2,"Inv2","POS34522",292.00), 
                   new Invoice(3,"Inv3","QOS34523",293.75),
                   new Invoice(4,"Inv4","ROS34524",294.34),
                   new Invoice(5,"Inv5","SOS34525",295.95),
                   new Invoice(6,"Inv6","TOS34526",296.54),
                   new Invoice(8,"Inv4","WQS34528",247.45)
                   )
         );

         //saving one record into Cassandra DB using insert() method
         repo.insert(new Invoice(7,"Inv7","VOS34527",297.65));
      }
}

In order to know the difference between these methods, kindly visit official Spring Data document.

Output 

On querying Cassandra DB (select * from invoice;), you can verify that the inserted records are present.

save(), insert(): Updating records using Spring Boot & Cassandra

Like other popular database repositories, there are no specific methods to update records in Cassandra DB either. save() & insert() themselves let us update the records. For example, the below code demonstrates the concept.

package com.dev.springboot.runner;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.CommandLineRunner;
import org.springframework.stereotype.Component;
import com.dev.springboot.entity.Invoice;
import com.dev.springboot.repository.InvoiceRepository;

@Component
public class UpdateOperationRunner implements CommandLineRunner {

      @Autowired
      InvoiceRepository repo;

      @Override
      public void run(String... args) throws Exception {

          //Update Invoice Number from 'Inv1' to 'Inv01' using save() where id=1
          repo.save(new Invoice(1,"Inv01","POS34523",295.74));

          //Update Invoice Amount from '294.34' to '395.24' using insert() where id=4
          repo.insert(new Invoice(4,"Inv4","ROS34524",395.24));
      }
}

♥ Note: Because Cassandra uses an append model, there is no fundamental difference between the insert and update operations.  If you insert a row that has the same primary key as an existing row, the row is replaced. If you update a row and the primary key does not exist, Cassandra creates it.

Output

On querying Cassandra DB, you can verify that the records have been updated.

findAll(), findById(), findByProperty(): Retrieving records using Spring Boot & Cassandra

Like other popular database repositories, there are methods to retrieve records from Cassandra DB as well. findAll() & findById() are the most popular methods to retrieve records. Apart from that, we will create an example of findByName(), where name is a property of our model class. For example, the below code demonstrates the concept.

package com.dev.springboot.runner;

import java.util.List;
import java.util.Optional;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.CommandLineRunner;
import org.springframework.stereotype.Component;
import com.dev.springboot.entity.Invoice;
import com.dev.springboot.repository.InvoiceRepository;

@Component
public class FindOperationRunner implements CommandLineRunner {

      @Autowired
      InvoiceRepository repo;

      @Override
      public void run(String... args) throws Exception {

          //Retrieve all records using the findAll() method
          List<Invoice> invoices = repo.findAll();
          invoices.forEach(System.out::println);

          //Retrieve a record by id using the findById() method
          Optional<Invoice> opt = repo.findById(5);
          if(opt.isPresent()) {
               System.out.println(opt.get().getName());
          }

          //Retrieve records by invoice name using the findByName() method
          List<Invoice> invoicesByName = repo.findByName("Inv4");
          invoicesByName.forEach(System.out::println);

      }
}

Output

On running the application, you can see below output on the console.

Invoice(id=5, name=Inv5, number=SOS34525, amount=295.95)
Invoice(id=1, name=Inv01, number=POS34523, amount=295.74)
Invoice(id=8, name=Inv4, number=WQS34528, amount=247.45)
Invoice(id=2, name=Inv2, number=POS34522, amount=292.0)
Invoice(id=4, name=Inv4, number=ROS34524, amount=395.24)
Invoice(id=7, name=Inv7, number=VOS34527, amount=297.65)
Invoice(id=6, name=Inv6, number=TOS34526, amount=296.54)
Invoice(id=3, name=Inv3, number=QOS34523, amount=293.75)

Inv5

Invoice(id=8, name=Inv4, number=WQS34528, amount=247.45)
Invoice(id=4, name=Inv4, number=ROS34524, amount=395.24)

deleteAll(), deleteById(): Removing records using Spring Boot & Cassandra

deleteAll() & deleteById() are the most popular methods to delete records. As the names suggest, deleteAll() will delete all the records from the Cassandra DB, whereas deleteById() will delete a single record against that id. For example, the below code demonstrates the concept.

package com.dev.springboot.runner;

import java.util.List;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.CommandLineRunner;
import org.springframework.stereotype.Component;
import com.dev.springboot.entity.Invoice;
import com.dev.springboot.repository.InvoiceRepository;

@Component
public class DeleteOperationRunner implements CommandLineRunner {

     @Autowired      
     InvoiceRepository repo;

     @Override 
     public void run(String... args) throws Exception {

         //Remove a record where id=2 using deleteById() method
         repo.deleteById(2);

          //Retrieve all records using the findAll() method
         List<Invoice> invoices = repo.findAll();
         invoices.forEach(System.out::println);
      }
}

Output

On running the application, the record with id=2 will get removed from the DB. Also, you can see below output on the console.

Invoice(id=5, name=Inv5, number=SOS34525, amount=295.95)
Invoice(id=1, name=Inv01, number=POS34523, amount=295.74)
Invoice(id=8, name=Inv4, number=WQS34528, amount=247.45)
Invoice(id=4, name=Inv4, number=ROS34524, amount=395.24)
Invoice(id=7, name=Inv7, number=VOS34527, amount=297.65)
Invoice(id=6, name=Inv6, number=TOS34526, amount=296.54)
Invoice(id=3, name=Inv3, number=QOS34523, amount=293.75)

What is @AllowFiltering annotation in Cassandra?

The annotation @AllowFiltering is a very new term, especially for those who are going to use Cassandra for the first time. Hence, it becomes important to know about it. If you don’t apply it to your custom method in the repository interface, you may face the below exception.

“com.datastax.oss.driver.api.core.servererrors.InvalidQueryException: Cannot execute this query as it might involve data filtering and thus may have unpredictable performance. If you want to execute this query despite the performance unpredictability, use ALLOW FILTERING”

What happens when you write ‘Where’ clause in your Query?

When we impose a filter condition on our query (select * from invoice where name='Inv1'), we will receive the above exception in response. Cassandra knows that it might not be able to execute the query in an efficient way. The only way Cassandra can execute this query is by retrieving all the rows from the table invoice and then filtering out the ones which do not have the requested value for the name column.

For example, suppose your table contains one thousand rows and 90% of them have the requested value for the name column; the query will still be relatively efficient and you should use ALLOW FILTERING.

On the other hand, suppose your table contains one thousand rows and only 4 rows contain the requested value for the name column, your query is extremely inefficient. Cassandra will load 996 rows for nothing. In this case, instead of using this query it is probably better to add an index on the name column.

Unfortunately, Cassandra has no way to differentiate between the two cases above, as they depend on the data distribution of the table. Therefore, Cassandra warns you and relies on you to make a good choice.

When Cassandra rejects your query just because it needs filtering, you should add @AllowFiltering to your method. You should think about your data, your model and what you are trying to do. You always have multiple options, such as changing your data model, adding an index, using another table or using ALLOW FILTERING. You have to make the right choice for your specific use case.
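
Alternatively, if you prefer to be explicit, a hand-written query can carry the ALLOW FILTERING clause itself. Below is a minimal sketch, assuming the Invoice entity from the earlier steps and a hypothetical InvoiceQueryRepository interface, using Spring Data Cassandra's @Query annotation.

package com.dev.springboot.repository;

import java.util.List;
import org.springframework.data.cassandra.repository.CassandraRepository;
import org.springframework.data.cassandra.repository.Query;
import com.dev.springboot.entity.Invoice;

public interface InvoiceQueryRepository extends CassandraRepository<Invoice, Integer> {

    // Explicit CQL with ALLOW FILTERING; equivalent in effect to @AllowFiltering on a derived query
    @Query("select * from invoice where name = ?0 ALLOW FILTERING")
    List<Invoice> fetchByName(String name);
}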

How to install Cassandra DB on Windows?

In this article, we are going to learn ‘How to install Cassandra DB on Windows?’ in simple steps. The installation process of Cassandra DB is a bit tricky compared to other database software. Needless to mention, Cassandra is a NoSQL DB and it is known for high performance.

How to install Cassandra DB on Windows?

We will discuss the process of installing Cassandra in our local system here in this section. There are three mandatory software that we should have in our system in order to take complete benefit of the Cassandra DB. These are:

1) JDK 8+
2) Python 2.7+
3) Apache Cassandra 3+

Apart from that, we need some special settings in order to start Cassandra smoothly in our system. We will cover the whole process step by step.

How to install JDK?

If you are a Java developer, I am sure you must already have the JDK installed on your system. You must have JDK 8 or a higher version on your system to work with Cassandra. If not, you can go through the Oracle JDK download page to get it installed based on your system settings. The official documentation of Apache Cassandra says that you should have JDK 8+ on your system. However, many developers have tried it and suggested in many places on the web that it works only on JDK 8.

If you have set a version higher than JDK 8 in the JAVA_HOME environment variable, no worries; let it be as it is. We will set JDK 8 specifically for Cassandra in later steps.

How to install Python?

In order to work with Apache Cassandra 3+, you should have Python 2.7+ installed on your system. The official documentation of Apache Cassandra says that you should have Python 2.7+ on your system. However, many developers have tried it and suggested in many places on the web that it works only on Python 2.7. Hence, to be on the safer side, we will install Python 2.7 only. Moreover, Python is required to run the Cassandra Query Shell (cqlsh) on your system.

Visit the Python download page and click on the ‘Windows x86-64 MSI installer’ (for a 64-bit operating system) to download it.

Double-click on the downloaded MSI installer and follow the given instructions to install it.

How to install Cassandra?

In order to download & install Cassandra in your system, follow below steps:

Step#1: Access the Apache Cassandra download page from your browser and download the latest stable version. Needless to say, we should always install the latest stable version rather than the latest GA version in order to avoid any bug/issue. The latest stable version at this time is cassandra-3.11.12.

Step#2: Extract the downloaded file until you get a folder named ‘apache-cassandra-3.11.12’. Keep the folder in any directory, such as ‘C:\apache-cassandra-3.11.12’ in my case.

How to make Cassandra work in your system?

Step#1: Go to the environment variables section of your system, select the ‘Path’ system variable, then click on ‘Edit’. Now add a new entry: ‘C:\apache-cassandra-3.11.12\bin’.

Step#2: As aforementioned, we are assuming that the installed version of Cassandra will work on JDK 8 only. Instead of changing the JDK version in the environment variable, let’s change the configuration in Cassandra itself. If you change the JDK version in the environment variable just to run Cassandra, it may impact all software that uses the JDK, like your Eclipse IDE etc.

Go to the bin folder (‘C:\apache-cassandra-3.11.12\bin’) of Cassandra; you will find the ‘cassandra.bat’ file. Open it with the help of any text editor. In the ‘cassandra.bat’ file, after the text ‘set UNINSTALL="UNINSTALL"’, add a line in order to set JAVA_HOME as shown below, pointing to JDK 8. For example, in my case it is ‘jdk1.8.0_251’. At the end, don’t forget to save the file.

set JAVA_HOME=C:\Program Files\Java\jdk1.8.0_251

Step#3: At the same bin folder(‘C:\apache-cassandra-3.11.12\bin’), open a file named ‘nodetool.bat’. Now, add the same line (set JAVA_HOME=C:\Program Files\Java\jdk1.8.0_251) at the top of the file after the comments and save it.

Step#4: Open a command prompt (‘cmd’) window and navigate to the bin directory (‘C:\apache-cassandra-3.11.12\bin’). Now type cassandra to run it, then observe the message ‘Starting Cassandra Server’ at the start or ‘startup complete’ in the bottom lines.

While running Cassandra, you may face a warning: ‘Powershell script execution unavailable’. Refer to the below section in order to resolve this warning and run Cassandra with full functionality.

How to enable unrestricted Powershell execution Policy?

In order to run Cassandra with full functionality, we need to unrestrict the PowerShell execution policy. On running Cassandra, the warning about this appears on the console.

Let’s follow the below steps:

Step#1: In the Windows search bar, type powershell, then select ‘Windows PowerShell’ and run it as administrator. You will land at the location ‘C:\WINDOWS\system32’.

Step#2: Type command : Set-ExecutionPolicy Unrestricted

Step#3: Now, to check the effect of changes, type command: Get-ExecutionPolicy -List

You will see that the LocalMachine scope is now Unrestricted.
