
  • Understanding Java Record Class in 2 minutes

    Introduction Record classes were released as a preview feature in Java 14 (JEP 359) and became a standard feature in Java 16 (JEP 395). A record is a concise alternative to working with regular classes in Java, designed to eliminate the verbosity of creating a plain data-carrier class and its members, such as: the canonical constructor, public accessor methods, the equals and hashCode implementations, and the toString implementation. Using records, it is no longer necessary to declare any of the items above, helping the developer stay focused on other tasks. Let's understand it better in practice. Let's create a Java class called User and add some fields and methods. Note that for a simple class with 4 fields, we create a constructor, public accessor methods, equals and hashCode implementations and, finally, a toString method. It works well, but we could avoid the complexity and write less verbose code. In that case, we can use a record instead of the User class above. User record The difference between a record and a traditional Java class is remarkable. Note that it isn't necessary to declare the fields, create the accessor methods or implement any other method. When a record is created, the public accessor methods are generated implicitly, the equals, hashCode and toString implementations are also created automatically, and the record's components are stored as private final fields with the same names. Output Disadvantages A record behaves like an ordinary Java class, with the difference that it cannot take part in inheritance: a record cannot extend another class, it can only implement one or more interfaces. Another point is that it's not possible to declare non-static instance fields beyond the record's components. Final conclusion Records are a great approach for anyone looking for less verbose code or who needs agility when implementing data models.
Although a record cannot extend another class, this is a limitation that doesn't affect its use in general. Hope you enjoyed!
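For reference, here is a minimal, compilable sketch of the comparison described above. The User name comes from the article; the four components are an assumption, since the original code listings are not included in this excerpt:

```java
public class RecordDemo {

    // A record declares its components once; the compiler generates the
    // canonical constructor, accessors, equals, hashCode and toString.
    // The component names below are illustrative assumptions.
    record User(String name, String email, Integer age, Boolean active) {}

    public static void main(String[] args) {
        User user = new User("John", "john@mail.com", 35, true);

        // Accessors use the component names, with no "get" prefix.
        System.out.println(user.name());   // John

        // Generated toString: User[name=John, email=john@mail.com, age=35, active=true]
        System.out.println(user);

        // Generated equals compares component values, not references.
        System.out.println(user.equals(
                new User("John", "john@mail.com", 35, true)));   // true
    }
}
```

Everything a regular User class would need twenty-odd lines for collapses into the single record declaration.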

  • Getting started with Java Reflection in 2 minutes

    Introduction Java Reflection is a powerful API that allows a Java program to examine and manipulate information about its own classes at runtime. With Reflection, you can get information about a class's fields, methods, and constructors, and access and modify those elements even if they're private. In this post we're going to write some Java code exploring some of the facilities Reflection offers and when to apply it in your projects. Bank Class We'll create a simple class called Bank with some fields, methods and constructors to be explored using Reflection. Accessing the fields of the Bank class With the Bank class created, let's use Reflection to list all the fields of the class through the getDeclaredFields method of the Class class. Note that through the static method Class.forName, we pass as a parameter a string with the name of the class we want to explore via Reflection. Output Field name: code Field type: class java.lang.Integer ************ Field name: nameOfBank Field type: class java.lang.String ************ Field name: amountOfDepositedMoney Field type: class java.lang.Double ************ Field name: totalOfCustomers Field type: class java.lang.Integer ************ Accessing the methods of the Bank class Through the getDeclaredMethods method, we can retrieve all the methods of the Bank class. Output Method name: doDeposit Method type: class java.lang.String ************ Method name: doWithDraw Method type: class java.lang.String ************ Method name: getReceipt Method type: class java.lang.String ************ Creating objects To create objects with Reflection, it is necessary to create them through a constructor, so we must first retrieve and invoke a constructor. The detail is that to retrieve this constructor, we must pay attention to the types of the parameters that make up the constructor and the order in which they are declared.
This makes it flexible to retrieve different constructors with different numbers and types of parameters in a class. Notice below that it was necessary to create an array of type Class, assigning the types according to the signature of the constructor that we will use to create our object. In this scenario, we invoke the method class.getConstructor(argType), passing the previously created array as an argument. This way, we get a constructor object that will be used in the creation of our object. Finally, we create a new array of type Object, assigning the values that will compose our object following the order defined in the constructor, and then just invoke the method constructor.newInstance(argumentsValue), passing the array as a parameter, which returns the object we want to create. Output Bank{code=1, nameOfBank='Bank of America', amountOfDepositedMoney=1.5, totalOfCustomers=2500} Invoking methods Invoking a method through Reflection is quite simple, as shown in the code below. Note that it is necessary to pass to the method cls.getMethod("doDeposit", argumentsType) the explicit name of the method, in this case "doDeposit", and as the second parameter an array representing the type of the parameter of the method doDeposit(double amount), in this case double. Finally, we invoke method.invoke, passing as the first parameter the object on which the method will be called, in this case an object of type Bank, and as the second parameter, the value that will be passed to the method. Output 145.85 of money has been deposited Conclusion Using Reflection is a good strategy when you need the flexibility to explore different classes and their methods without referencing their types at compile time. Normally, Reflection is used in specific components of an architecture, but nothing prevents it from being used in different scenarios.
From the examples shown above, you can see countless scenarios for its application and the advantages of its use. Hope you enjoyed!
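For reference, the whole flow described above can be condensed into one runnable sketch. The Bank class below is reconstructed from the article's outputs (field names, constructor order and the doDeposit message); its internals are an assumption:

```java
import java.lang.reflect.Constructor;
import java.lang.reflect.Field;
import java.lang.reflect.Method;

public class ReflectionDemo {

    // Reconstructed from the article's output; the exact original class is an assumption.
    public static class Bank {
        private Integer code;
        private String nameOfBank;
        private Double amountOfDepositedMoney;
        private Integer totalOfCustomers;

        public Bank(Integer code, String nameOfBank,
                    Double amountOfDepositedMoney, Integer totalOfCustomers) {
            this.code = code;
            this.nameOfBank = nameOfBank;
            this.amountOfDepositedMoney = amountOfDepositedMoney;
            this.totalOfCustomers = totalOfCustomers;
        }

        public String doDeposit(double amount) {
            return amount + " of money has been deposited";
        }

        @Override
        public String toString() {
            return "Bank{code=" + code + ", nameOfBank='" + nameOfBank
                    + "', amountOfDepositedMoney=" + amountOfDepositedMoney
                    + ", totalOfCustomers=" + totalOfCustomers + "}";
        }
    }

    public static void main(String[] args) throws Exception {
        // Load the class by name (here a nested class in the default package).
        Class<?> cls = Class.forName("ReflectionDemo$Bank");

        // 1. List all declared fields and their types.
        for (Field field : cls.getDeclaredFields()) {
            System.out.println("Field name: " + field.getName());
            System.out.println("Field type: " + field.getType());
        }

        // 2. Create an object through the constructor matching these parameter types.
        Class<?>[] argType = {Integer.class, String.class, Double.class, Integer.class};
        Constructor<?> constructor = cls.getConstructor(argType);
        Object[] argumentsValue = {1, "Bank of America", 1.5, 2500};
        Object bank = constructor.newInstance(argumentsValue);
        System.out.println(bank);

        // 3. Invoke a method by name, passing the parameter types, then the values.
        Method method = cls.getMethod("doDeposit", double.class);
        System.out.println(method.invoke(bank, 145.85));
    }
}
```

Running it reproduces the field listing, the constructed Bank object and the deposit message shown in the article's outputs.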

  • Tutorial: Apache Airflow for beginners

    Intro Airflow has become one of the main orchestration tools on the market and is much talked about in the Modern Data Stack world, since it is capable of orchestrating data workloads through ETLs or ELTs. In fact, Airflow is not just about that: it can be applied to several day-to-day use cases of a Data or Software Engineer. In this Apache Airflow tutorial for beginners, we will introduce Airflow in the simplest way, without the need to know or create ETLs. But what is Airflow actually? Apache Airflow is a widely used workflow orchestration platform for scheduling, monitoring, and managing data pipelines. It has several components that work together to provide its functionality. Airflow components DAG The DAG (Directed Acyclic Graph) is the main component and the workflow representation in Airflow. It is composed of tasks and the dependencies between them. Tasks are defined through operators, such as the PythonOperator, the BashOperator, SQL operators and others. The DAG defines the task execution order and the dependency relationships. Webserver The Webserver component provides a web interface for interacting with Airflow. It allows you to view, manage and monitor your workflows, tasks, DAGs and logs. The Webserver also supports user authentication and role-based access control. Scheduler The Scheduler is responsible for scheduling the execution of tasks according to the DAG definition. It periodically checks for pending tasks to run and allocates available resources to perform them at the appropriate time. The Scheduler also handles crash recovery and task retries. Executor The Executor is responsible for executing the tasks defined in the DAGs. There are different types of executors available in Airflow, such as the LocalExecutor, CeleryExecutor and KubernetesExecutor. Each executor has its own settings and execution behavior. 
Metadatabase The Metadatabase is a database where Airflow stores metadata about tasks, DAGs, executions and schedules, among other things. It is used to track the status of tasks, record execution history, and provide information for workflow monitoring and visualization. Several databases can be used to store this history, such as MySQL and Postgres. Workers Workers are the execution nodes in a distributed environment. They receive tasks assigned by the Scheduler and execute them. Workers can be scaled horizontally to handle larger data pipelines or to spread the workload across multiple resources. Plugins Plugins are Airflow extensions that allow you to add new features and functionality to the system. They can include new operators, hooks, sensors, connections to external systems, and more. Plugins provide a way to customize and extend Airflow's capabilities to meet the specific needs of a workflow. Operators Operators are the building blocks of a DAG. Think of an operator as a block of code with its own responsibility. Because Airflow is an orchestrator that executes workflows, we can have different tasks to perform, such as accessing an API, sending an email, accessing a table in a database and performing an operation, executing Python code or even a Bash command. For each of these tasks, we must use an operator. Next, we will discuss some of the main operators: BashOperator The BashOperator allows you to run Bash commands or scripts directly on the operating system where Airflow is running. It is useful for tasks that involve running shell scripts, utilities, or any action that can be performed in the terminal. In short, when we need to open the system's terminal and execute some command to manipulate files or something related to the system itself, but within a DAG, this is the operator to use. PythonOperator The PythonOperator allows you to run Python functions as tasks in Airflow. 
You can write your own custom Python functions and use the PythonOperator to call them as part of your workflow. DummyOperator The DummyOperator is a "dummy" task that takes no action. It is useful for modeling complex dependencies and workflows without having to perform any real action. Sensor Sensors are used to wait for some external event to occur before the workflow continues; a sensor works like a listener. For example, the HttpSensor, which is a type of Sensor, can check whether an external API is up, and if so, the flow continues to run. It is not an HTTP operator that returns a response, but a kind of listener. HttpOperator Unlike a Sensor, the HttpOperator is used to perform HTTP requests such as GET, POST, PUT and DELETE, allowing you to interact more fully with internal or external APIs. SqlOperator SQL operators are responsible for performing DML and DDL operations in a database, that is, data manipulations such as SELECTs, INSERTs, UPDATEs and so on. Executors Executors are responsible for executing the tasks defined in a workflow (DAG). They manage the allocation and execution of tasks at runtime, ensuring that each task runs efficiently and reliably. Airflow offers different types of executors, each with different characteristics and functionality, allowing you to choose the most suitable one for your specific needs. Below, we'll cover some of the main executors: LocalExecutor The LocalExecutor is designed for development and test environments where scalability isn't a concern. It runs tasks as separate processes on the same machine where Airflow is running, an approach that is simple and efficient for smaller pipelines or single-node runs. CeleryExecutor If you need an executor for distributed and high-scale environments, the CeleryExecutor is an excellent choice. It uses Celery, a distributed task queue library, to distribute tasks across separate execution nodes. 
This approach makes Airflow well-suited for running pipelines on clusters of servers, allowing you to scale horizontally on demand. KubernetesExecutor For environments that use Kubernetes as their container orchestration platform, the KubernetesExecutor is a natural choice. It leverages Kubernetes' orchestration capabilities to run tasks in separate pods, which can result in better resource isolation and easier task execution in containers. DaskExecutor If your workflow requires parallel and distributed processing, the DaskExecutor might be the right choice. It uses the Dask library to perform parallel computing on a cluster of resources. This approach is ideal for tasks that can be divided into independent sub-tasks, allowing better use of the available resources. Programming language Airflow uses Python as its programming language. To be honest, this is not a blocker for those who don't know the language well: in practice, the process of creating DAGs follows a standard structure, and what changes according to your needs are the operators you use, which may or may not require Python. Hands-on Setting up the environment For this tutorial we will use Docker, which will help us provision our environment without the need to install Airflow manually. If you don't have Docker installed, I recommend following the recommendations in this link and, after installing it, coming back to follow the tutorial. Downloading the project To make things easier, clone the project from the following repository and follow the steps to deploy Airflow. Steps to deploy With Docker installed and the project downloaded as described in the previous item, go to the directory where the project is located, open the terminal and run the following command: docker-compose up The command above will start the Docker containers with the Airflow services, a Postgres database and more. If you're curious about how these services are mapped, open the project's docker-compose.yaml file, where you'll find more details. 
After executing the command above and with the containers started, access the following address in your browser: http://localhost:8080/ A screen like the one below will open; just type airflow for both the username and the password to access the Airflow UI. Creating a DAG Creating a simple Hello World For this tutorial, we will create a simple DAG where the classic "Hello World" will be printed. In the project you downloaded, go to the /dags folder and create a Python file called hello_world.py. The code above is a simple example of a DAG written in Python. Notice that we start by importing some functions, including the DAG itself, datetime-related functions and the Python operator. Next, we create a Python function called print_hello that prints "Hello World" to the console. This function will be called by the DAG later on. The declaration of a DAG starts with the syntax with DAG(..), passing some arguments such as: dag_id: the DAG identifier in the Airflow context start_date: the defined date is only a point of reference, not necessarily the date when execution begins nor when the DAG was created. Executions are usually carried out at a later date than the one defined in this parameter, and it matters when we need to calculate executions between the start date and the interval defined in the schedule_interval parameter. schedule_interval: in this parameter we define the periodicity at which the DAG will be executed. Different schedules can be defined through CRON expressions or through predefined strings such as @daily, @hourly, @once and @weekly. In our example, the flow will run only once. catchup: this parameter controls retroactive executions; if set to True, Airflow will execute the retroactive period from the date defined in start_date until the current date. In our example we set it to False because there is no need for retroactive execution. 
After filling in the arguments, we create the hello_task within the DAG itself using the PythonOperator, which provides ways to execute Python functions within a DAG. Note that we declare an identifier through the task_id argument and, in the python_callable argument, which is native to the PythonOperator, we pass the print_hello function created earlier. Finally, we invoke the hello_task; this way, the DAG will understand that this is the task to be performed. If you have already deployed it, the DAG will appear in Airflow shortly, ready to be executed as shown in the image below: After the DAG is created, activate it and execute it by clicking on Trigger DAG as shown in the image above. Click on the hello_operator task (center) and a window will open as shown in the image below: Click the Log button to see more execution details: Note how simple it is to create a DAG; just think about the different possibilities and applicability scenarios. In the next tutorials, we'll work through somewhat more complex examples exploring several other scenarios. Conclusion Based on the simple example shown, Airflow presents a flexible and simple approach to controlling automated flows, from creating DAGs to navigating its web component. As I mentioned at the beginning, its use is not limited to the orchestration of ETLs; it can be used for any task that requires controlling flows with dependencies between their components, in any context, scalable or not. GitHub Repository Hope you enjoyed!
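For reference, here is a sketch of what the hello_world.py file described above might look like, assuming an Airflow 2.x environment (the task_id hello_operator and the argument values follow the article's description):

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


# Plain Python function that the DAG will call; it just prints to the console.
def print_hello():
    print("Hello World")


with DAG(
    dag_id="hello_world",             # DAG identifier in the Airflow context
    start_date=datetime(2023, 1, 1),  # reference date, not the actual first run
    schedule_interval="@once",        # run the flow only once
    catchup=False,                    # no retroactive (backfill) executions
) as dag:

    hello_task = PythonOperator(
        task_id="hello_operator",
        python_callable=print_hello,  # the function executed by the task
    )

    hello_task
```

Dropped into the /dags folder, this definition is picked up by the Scheduler and appears in the Airflow UI as described above.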

  • Creating Asynchronous Java Code with Future

    Intro Java Future is one of several ways to work with the language asynchronously, providing a multi-threaded context in which it is possible to execute tasks in parallel without blocking the process. In the example below, we will simulate sending a fictitious email batch in which, even while the sending is in progress, the process will not be blocked, that is, it will not be necessary to wait for the sending to finish for other functionality or mechanisms to keep operating. EmailService class Understanding the EmailService class The class above represents sending emails in a fictitious way; the idea of using the loop to simulate the sending is precisely to delay the process itself. Finally, at the end of the sending, the sendEmailBatch(int numberOfEmailsToBeSent) method returns a String containing a message indicating the end of the process. EmailServiceAsync class Understanding the EmailServiceAsync class The EmailServiceAsync class represents the asynchronous mechanism itself; in it we have the sendEmailBatchAsync(int numberOfEmailsToBeSent) method, which is responsible for making the process of sending dummy e-mails asynchronous. The asynchronous process is managed using an ExecutorService instance, which facilitates the management of asynchronous tasks assigned to a pool of threads. In this case, the call to the sendEmailBatch(int numberOfEmailsToBeSent) method boils down to a task that will be assigned to a thread defined in Executors.newFixedThreadPool(1). Finally, the method returns a Future, which is literally a promise that the task will be completed at some point, representing an asynchronous process. EmailServiceAsyncRun class Understanding the EmailServiceAsyncRun class This is the class where we will test the asynchronous process using Future. 
Let's recap: in the EmailService class, we created a method called sendEmailBatch(int numberOfEmailsToBeSent) in which we simulate, through the for loop, the sending of dummy emails and print a sending message that we'll use to observe the concurrency. In the EmailServiceAsync class, the sendEmailBatchAsync(int numberOfEmailsToBeSent) method creates an ExecutorService instance that will manage the tasks together with the thread pool, which in this case has just one thread, defined in Executors.newFixedThreadPool(1), and will return a Future. Now, the EmailServiceAsyncRun class is where we actually test the process. Let's go through it step by step: We instantiate an object of type EmailServiceAsync. We create an object of type Future and assign it the return of the emailAsync.sendEmailBatchAsync(500) method. The idea of the argument 500 is just to control the iterations of the for loop, delaying the end of the process. We could even use Thread.sleep() as an alternative and set a delay time, which would also work fine. Note that we use the futureReturn.isDone() method to control the while iteration, that is, this method allows the process not to be blocked while the email flow is being executed. In this case, any process that you want to run concurrently while the sending takes place can be created inside the while loop, such as a flow that updates customer tables or any other process. On line 20, using the futureReturn.get() method, we print the result of sending the emails. And finally, we finish the executorService and its tasks through the executorService.shutdown() method. Running the process Notice that there are two distinct processes running: the email-sending process, "Sending email Nº 498..", and the process of updating a customer table. Finally, the process finishes when the message "A total of 500 emails has been sent" is printed. 
Working with blocking processes Future is also widely used for cases where we need to block a process: the current thread will be blocked until the process being executed by the Future ends. To do so, simply invoke the futureReturn.get() method directly, without any iteration control as used in the previous example. An important point is that this kind of approach can waste resources due to the blocking of the current thread. Conclusion The use of Future is very promising when we need to add asynchronous processing to our code in the simplest way, or even use it to block processes. It's a lean API with certain limitations, but one that works well for many scenarios. Hope you enjoyed!
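For reference, the three classes described above can be condensed into a single runnable sketch. The name sendEmailBatch, the futureReturn variable and the printed messages come from the article; the method bodies (and the small pause that keeps the console readable) are assumptions:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class EmailServiceAsyncRun {

    // Simulates a slow batch send; the loop exists only to delay the process.
    static String sendEmailBatch(int numberOfEmailsToBeSent) {
        for (int i = 1; i <= numberOfEmailsToBeSent; i++) {
            System.out.println("Sending email Nº " + i + "..");
        }
        return "A total of " + numberOfEmailsToBeSent + " emails has been sent";
    }

    public static void main(String[] args) throws Exception {
        // A pool with a single thread, as in Executors.newFixedThreadPool(1).
        ExecutorService executorService = Executors.newFixedThreadPool(1);

        // submit returns a Future: a promise that the task will complete.
        Future<String> futureReturn =
                executorService.submit(() -> sendEmailBatch(500));

        // While the batch runs, the current thread stays free for other work.
        while (!futureReturn.isDone()) {
            System.out.println("Updating customer tables...");
            Thread.sleep(10); // assumption: small pause to keep the output readable
        }

        // get() returns the task result (it would block if the task were not done).
        System.out.println(futureReturn.get());
        executorService.shutdown();
    }
}
```

For the blocking variant discussed above, calling futureReturn.get() right after submit, with no while loop, blocks the current thread until the batch finishes.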

  • Accessing APIs and Extracting Data with Airflow

    Intro Airflow provides different ways of working with automated flows, and one of them is the ability to access external APIs using HTTP operators and extract the necessary data. Hands-on In this tutorial we will create a DAG which will access an external API and extract the data directly to a local file. If this is your first time using Airflow, I recommend accessing this link to understand more about Airflow and how to set up an environment. Creating the DAG For this tutorial, we will create a DAG that will trigger every hour (schedule_interval="0 * * * *") and access an external API, extracting some data directly to a local JSON file. In this scenario we will use the SimpleHttpOperator, which provides an API capable of executing requests to external APIs. Note that we use two operators within the same DAG. The SimpleHttpOperator provides ways of accessing external APIs: through the method field we define the HTTP method (GET, POST, PUT, DELETE), the endpoint field allows specifying the endpoint of the API, which in this case is products, and finally there is the http_conn_id parameter, where it's necessary to pass the identifier of the connection that will be defined next through the Airflow UI. As shown below, access the menu Admin > Connections, fill in the data as shown in the image below and then save. As for the PythonOperator, we are only using it to execute a Python function called _write_response using XComs: through the task_id of the extraction task, it is possible to retrieve the result of the response and use it in any part of the code. In this scenario we use the result retrieved from the API to write the file. XCom is a communication mechanism between different tasks that makes Airflow very flexible. Tasks can often be executed on different machines, and with the use of XComs, communication and information exchange between tasks becomes possible. 
Finally, we define the execution of the tasks and their dependencies using the >> operator, which basically defines the order of execution between the tasks. In our case, the API access and extraction must be performed before writing to the file: extract_data >> write_response. After executing the DAG, it is possible to access the file that was generated with the result of the extraction; just access one of the workers via the terminal, which in this case is only one. Run the following command to list the containers: docker ps A listing similar to the one below will be displayed. Notice that one of the lines in the NAMES column refers to the worker, in this case coffee_and_tips_airflow-worker_1. Continuing in the terminal, type the following command to access the Airflow directory where the extract_data.json file is located: docker exec -it coffee_and_tips_airflow-worker_1 /bin/bash That's it, now just open the file and check the content. Conclusion Once again we saw the power of Airflow for automated processes that require easy access to and integration with external APIs using few lines of code. In this example, we explored the use of XComs, which make the exchange of messages between tasks that may be executed on different machines in a distributed environment more flexible. Hope you enjoyed!
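For reference, here is a sketch of what the DAG described above might look like, assuming an Airflow 2.x environment with the HTTP provider installed. The task ids, the products endpoint and the _write_response function come from the article; the connection id api_connection and the output path are assumptions:

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator
from airflow.providers.http.operators.http import SimpleHttpOperator


# Pulls the API response from XCom (pushed by the extract_data task)
# and writes it to a local JSON file.
def _write_response(ti):
    response = ti.xcom_pull(task_ids="extract_data")
    with open("extract_data.json", "w") as f:
        f.write(response)


with DAG(
    dag_id="extract_api_data",
    start_date=datetime(2023, 1, 1),
    schedule_interval="0 * * * *",      # trigger every hour
    catchup=False,
) as dag:

    extract_data = SimpleHttpOperator(
        task_id="extract_data",
        http_conn_id="api_connection",  # connection created in the Airflow UI
        endpoint="products",
        method="GET",
    )

    write_response = PythonOperator(
        task_id="write_response",
        python_callable=_write_response,
    )

    # API access and extraction run before writing the file.
    extract_data >> write_response
```

The SimpleHttpOperator pushes the response body to XCom by default, which is what makes the xcom_pull in _write_response work.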

  • Quick guide about Apache Kafka: Powering Event-Driven architecture

    Introduction In today's data-driven world, the ability to efficiently process and analyze vast amounts of data in real time has become a game-changer for businesses and organizations of all sizes. From e-commerce platforms and social media to financial institutions and IoT devices, the demand for handling data streams at scale is ever-increasing. This is where Apache Kafka steps in as a pivotal tool in the world of event-driven architecture. Imagine a technology that can seamlessly connect, process, and deliver data between countless systems and applications in real time. Apache Kafka, often referred to as a distributed streaming platform, is precisely that technology. It's the unsung hero behind the scenes, enabling real-time data flow and providing a foundation for a multitude of modern data-driven applications. In this quick guide, we'll take a deep dive into Apache Kafka, unraveling its core concepts, architecture, and use cases. Whether you're new to Kafka or looking to deepen your understanding, this guide will serve as your compass on a journey through the exciting world of real-time data streaming. We'll explore the fundamental principles of Kafka, share real-world examples of its applications, and provide practical insights for setting up your own Kafka environment. So, let's embark on this adventure and discover how Apache Kafka is revolutionizing the way we handle data in the 21st century. Key Concepts of Kafka 1. Topics What Are Kafka Topics? In Kafka, a topic is a logical channel or category for data. It acts as a named conduit for records, allowing producers to write data to specific topics and consumers to read from them. Think of topics as a way to categorize and segregate data streams. For example, in an e-commerce platform, you might have topics like "OrderUpdates," "InventoryChanges," and "CustomerFeedback," each dedicated to a specific type of data. 
Partitioning within Topics One of the powerful features of Kafka topics is partitioning. When a topic is divided into partitions, it enhances Kafka's ability to handle large volumes of data and distribute the load across multiple brokers. Partitions are the unit of parallelism in Kafka, and they provide fault tolerance, scalability, and parallel processing capabilities. Each partition is ordered and immutable, and records within a partition are assigned a unique offset, a numeric identifier representing the position of a record within the partition. This offset is used by consumers to keep track of the data they have consumed, allowing them to resume from where they left off in case of failure or when processing real-time data. Data organization Topics provide a structured way to organize data. They are particularly useful when dealing with multiple data sources and data types. Topics work as storage within the Kafka context: data sent by producers is organized into topics and partitions. Publish-Subscribe Model Kafka topics implement a publish-subscribe model, where producers publish data to a topic and consumers subscribe to the topics of interest to receive the data. An analogy is subscribing to a newsletter to receive news or articles: when a piece of news is posted, you as a subscriber will receive it. Scalability Topics can be split into partitions, allowing Kafka to distribute data across multiple brokers for scalability and parallel processing. Data Retention Each topic can have its own data retention policy, defining how long data remains in the topic. This makes it easier to manage data volume and decide whether or not to free up space. 2. Producers In Kafka, a producer is a crucial component responsible for sending data to Kafka topics. Think of producers as information originators: applications or systems that generate and publish records to specific topics within the Kafka cluster. 
These records could represent anything from user events on a website to system logs or financial transactions. Producers are the source of truth for data in Kafka. They generate records and push them to designated topics for further processing. Producers also decide which topic a message will be sent to, based on the nature of the data; this ensures that data is appropriately categorized within the Kafka ecosystem. Data Type Producers usually send messages in JSON format, which makes transferring the data into storage easier. Acknowledgment Handling Producers can handle acknowledgments from the Kafka broker, ensuring that data is successfully received and persisted. This acknowledgment mechanism contributes to data reliability. Sending data to specific partitions Producers can also send messages directly to a specific partition within a topic. 3. Consumers Consumers are important components in the Kafka context; they are responsible for consuming data from topics and making it available to downstream applications. Basically, consumers subscribe to Kafka topics, and any data produced there will be received by the consumers, representing the pub/sub approach. Subscribing to Topics Consumers actively subscribe to Kafka topics, indicating their interest in specific streams of data. This subscription model enables consumers to receive relevant information aligned with their use case. Data Processing Consumers continuously receive new data from topics, and each consumer is responsible for processing this data according to its needs. A microservice that works as a consumer, for example, can consume data from a topic responsible for storing application logs and perform some processing before delivering it to the user or to other third-party applications. Integration between apps As mentioned previously, Kafka enables applications to easily integrate their services across varied topics and consumers. One of the most common use cases is integration between applications. 
In the past, applications needed to connect to different databases to access data from other applications; this created vulnerabilities and violated principles of responsibility between applications. Technologies like Kafka make it possible to integrate different services using the pub/sub pattern, where different consumers, represented by applications, can access the same topics and process the data in real time without needing to access third-party databases or any other data source, avoiding security risks and adding agility to the data delivery process. 4. Brokers Brokers are fundamental pieces in Kafka's architecture; they are responsible for mediating and managing the exchange of messages between producers and consumers. Brokers manage the storage of data produced by producers and guarantee reliable transmission of data within a Kafka cluster. In practice, brokers play a transparent role within a Kafka cluster, but below I will highlight some of their responsibilities that make all the difference to the functioning of Kafka. Data reception Brokers are responsible for receiving the data; they function as an entry point or proxy for the data produced and then manage all the storage so that it can be consumed by any consumer. Fault tolerance As in any data architecture, we need to think about fault tolerance. In the context of Kafka, brokers are responsible for ensuring that even in the event of failures data remains durable and highly available. Brokers manage the partitions within the topics and are capable of replicating the data, anticipating failures and reducing the possibility of data loss. Data replication As mentioned in the previous item, data replication is a way to reduce data loss in case of failures. Data is replicated across multiple replicas of partitions stored on different brokers, so that even if one broker fails, the data is replicated on several others. 
Responsible for managing partitions We mentioned partitions within topics in a recent article, but we did not mention who manages them. Each partition is managed by a Broker, which coordinates reads and writes to that partition and also distributes the data load across the cluster. In short, Brokers perform orchestration work within a Kafka cluster, managing the reads and writes done by producers and consumers, ensuring that message exchanges are carried out and that no data is lost in the event of failures in some of its components, thanks to data replication, which is also managed by them. Conclusion Apache Kafka stands as a versatile and powerful solution, addressing the complex demands of modern data-driven environments. Its scalable, fault-tolerant, and real-time capabilities make it an integral part of architectures handling large-scale, dynamic data streams. Kafka has been adopted by companies across different business sectors, such as LinkedIn (where Kafka was originally developed, by the way), Netflix, Uber, Airbnb, Walmart, Goldman Sachs, Twitter and more.
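To make the pub/sub flow described above more concrete, here is a minimal in-memory sketch in Java (this is not Kafka's actual API, just the topic/subscription concept: producers publish records to a topic and every subscribed consumer receives them):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

// Minimal in-memory illustration of pub/sub: topics map to subscriber lists.
class MiniPubSub {
    private final Map<String, List<Consumer<String>>> subscribers = new HashMap<>();

    // A consumer subscribes to a topic, declaring interest in its records.
    void subscribe(String topic, Consumer<String> consumer) {
        subscribers.computeIfAbsent(topic, t -> new ArrayList<>()).add(consumer);
    }

    // A producer publishes a record; every subscriber of the topic receives it.
    void publish(String topic, String record) {
        subscribers.getOrDefault(topic, List.of()).forEach(c -> c.accept(record));
    }

    public static void main(String[] args) {
        MiniPubSub bus = new MiniPubSub();
        List<String> logsService = new ArrayList<>();
        List<String> alertsService = new ArrayList<>();

        // Two independent consumers subscribe to the same topic,
        // so neither needs access to the other's data store.
        bus.subscribe("app-logs", logsService::add);
        bus.subscribe("app-logs", alertsService::add);

        bus.publish("app-logs", "user=42 action=login");

        System.out.println(logsService);
        System.out.println(alertsService);
    }
}
```

Both lists end up with the same record, which is the property that lets multiple applications integrate through topics instead of through each other's databases.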

  • Differences between Future and CompletableFuture

    Introduction In the realm of asynchronous and concurrent programming in Java, Future and CompletableFuture serve as essential tools for managing and executing asynchronous tasks. Both constructs offer ways to represent the result of an asynchronous computation, but they differ significantly in terms of functionality, flexibility, and ease of use. Understanding the distinctions between Future and CompletableFuture is crucial for Java developers aiming to design robust and efficient asynchronous systems. At its core, a Future represents the result of an asynchronous computation that may or may not be complete. It allows developers to submit tasks for asynchronous execution and obtain a handle to retrieve the result at a later point. While Future provides a basic mechanism for asynchronous programming, its capabilities are somewhat limited in terms of composability, exception handling, and asynchronous workflow management. On the other hand, CompletableFuture introduces a more advanced and versatile approach to asynchronous programming in Java. It extends the capabilities of Future by offering a fluent API for composing, combining, and handling asynchronous tasks with greater flexibility and control. CompletableFuture empowers developers to construct complex asynchronous workflows, handle exceptions gracefully, and coordinate the execution of multiple tasks seamlessly. In this article, we will dive deeper into the differences between Future and CompletableFuture, exploring their respective features, use cases, and best practices. By understanding the distinct advantages and trade-offs of each construct, developers can make informed decisions when designing asynchronous systems and leveraging concurrency in Java applications. Let's embark on a journey to explore the nuances of Future and CompletableFuture in the Java ecosystem. 
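Before diving into the use cases, a minimal, self-contained JDK example makes the core contrast concrete: a Future can only be blocked on to pull the result out, while a CompletableFuture lets you chain a transformation onto the result without blocking:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class FutureVsCompletableFuture {
    public static void main(String[] args) throws Exception {
        ExecutorService executor = Executors.newSingleThreadExecutor();

        // Future: the only way to get the value is to block with get().
        Future<Integer> future = executor.submit(() -> 21 * 2);
        System.out.println("Future result: " + future.get());

        // CompletableFuture: further stages are chained onto the computation,
        // so the transformation runs when the value is ready.
        CompletableFuture<String> cf = CompletableFuture
                .supplyAsync(() -> 21 * 2)
                .thenApply(n -> "CompletableFuture result: " + n);
        System.out.println(cf.join());

        executor.shutdown();
    }
}
```

The Future variant couples the caller to a blocking call, while the CompletableFuture variant expresses "what to do next" declaratively; the rest of this article expands on that difference.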
Use Cases for Future Parallel Processing: Use Future to parallelize independent tasks across multiple threads and gather results asynchronously. For example, processing multiple files concurrently. Asynchronous IO: When performing IO operations that are blocking, such as reading from a file or making network requests, you can use Future to perform these operations in separate threads and continue with other tasks while waiting for IO completion. Task Execution and Coordination: Use Future to execute tasks asynchronously and coordinate their completion. For example, in a web server, handle multiple requests concurrently using futures for each request processing. Timeout Handling: You can set timeouts for Future tasks to avoid waiting indefinitely for completion. This is useful when dealing with resources with unpredictable response times. Use Cases for CompletableFuture Async/Await Pattern: CompletableFuture supports a fluent API for chaining asynchronous operations, allowing you to express complex asynchronous workflows in a clear and concise manner, similar to the async/await pattern in other programming languages. Combining Results: Use CompletableFuture to combine the results of multiple asynchronous tasks, either by waiting for all tasks to complete (allOf) or by combining the results of two tasks (thenCombine, thenCompose). Exception Handling: CompletableFuture provides robust exception handling mechanisms, allowing you to handle exceptions thrown during asynchronous computations gracefully using methods like exceptionally or handle. Dependency Graphs: You can build complex dependency graphs of asynchronous tasks using CompletableFuture, where the completion of one task triggers the execution of another, allowing for fine-grained control over the execution flow. Non-blocking Callbacks: CompletableFuture allows you to attach callbacks that are executed upon completion of the future, enabling non-blocking handling of results or errors. 
Completing Future Manually: Unlike Future, you can complete a CompletableFuture manually using methods like complete, completeExceptionally, or cancel. This feature can be useful in scenarios where you want to provide a result or handle exceptional cases explicitly. Examples Creation and Completion Future code example of creation and completion. ExecutorService executor = Executors.newSingleThreadExecutor(); Future<Integer> future = executor.submit(() -> { Thread.sleep(2000); return 10; }); CompletableFuture code example of creation and completion. CompletableFuture<Integer> completableFuture = CompletableFuture.supplyAsync(() -> { try { Thread.sleep(2000); } catch (InterruptedException e) { e.printStackTrace(); } return 10; }); With CompletableFuture, the supplyAsync method allows for asynchronous execution without the need for an external executor service, as shown in the first example. Chaining Actions A plain Future cannot chain actions; you have to block with get() and apply the next step yourself: Future<Integer> future = executor.submit(() -> 10); String result = "Result: " + future.get(); Now, an example using CompletableFuture to chain actions without blocking: CompletableFuture<Integer> completableFuture = CompletableFuture.supplyAsync(() -> 10); CompletableFuture<String> result = completableFuture.thenApply(i -> "Result: " + i); CompletableFuture offers a fluent API (thenApply, thenCompose, etc.) to chain actions, making it easier to express asynchronous workflows. Exception Handling Handling an exception using Future: Future<Integer> future = executor.submit(() -> { throw new RuntimeException("Exception occurred"); }); Handling an exception using CompletableFuture: CompletableFuture<Integer> completableFuture = CompletableFuture.supplyAsync(() -> { throw new RuntimeException("Exception occurred"); }); CompletableFuture allows for more flexible exception handling using methods like exceptionally or handle. 
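The exception-handling difference can be made concrete with a small runnable JDK example; the fallback values here are arbitrary choices for illustration:

```java
import java.util.concurrent.CompletableFuture;

public class ExceptionallyExample {
    public static void main(String[] args) {
        // With a plain Future, a failure only surfaces when get() throws
        // ExecutionException. CompletableFuture lets you recover declaratively:
        CompletableFuture<Integer> recovered = CompletableFuture
                .<Integer>supplyAsync(() -> { throw new RuntimeException("Exception occurred"); })
                .exceptionally(ex -> -1); // fallback value on failure

        System.out.println(recovered.join());

        // handle() receives both the result and the exception in one callback,
        // so success and failure paths live in the same place.
        CompletableFuture<String> handled = CompletableFuture
                .supplyAsync(() -> 10)
                .handle((value, ex) -> ex == null ? "ok: " + value : "failed: " + ex.getMessage());

        System.out.println(handled.join());
    }
}
```

Here `exceptionally` substitutes a fallback only when the stage failed, while `handle` always runs and inspects both sides; with a plain Future you would need a try/catch around `get()` instead.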
Waiting for Completion // Future Integer result = future.get(); // CompletableFuture Integer result = completableFuture.get(); Both Future and CompletableFuture provide the get() method to wait for the completion of the computation and retrieve the result. Combining Multiple CompletableFutures CompletableFuture<Integer> future1 = CompletableFuture.supplyAsync(() -> 10); CompletableFuture<Integer> future2 = CompletableFuture.supplyAsync(() -> 20); CompletableFuture<Integer> combinedFuture = future1.thenCombine(future2, (x, y) -> x + y); CompletableFuture provides methods like thenCombine, thenCompose, and allOf to perform combinations or compositions of multiple asynchronous tasks. Conclusion In the dynamic landscape of asynchronous and concurrent programming in Java, both Future and CompletableFuture stand as indispensable tools, offering distinct advantages and use cases. While Future provides a basic mechanism for representing the result of asynchronous computations, its capabilities are somewhat limited when it comes to composability, exception handling, and asynchronous workflow management. On the other hand, CompletableFuture emerges as a powerful and flexible alternative, extending the functionalities of Future with a fluent API for composing, combining, and handling asynchronous tasks with greater control and elegance. The choice between Future and CompletableFuture hinges on the specific requirements and complexities of the task at hand. For simple asynchronous operations or when working within the confines of existing codebases, Future may suffice. However, in scenarios that demand more sophisticated asynchronous workflows, exception handling, or task coordination, CompletableFuture offers a compelling solution with its rich feature set and intuitive API.

  • How to create a serverless app with AWS SAM

For this post, I will teach you how to create a serverless app with AWS SAM. AWS SAM (Serverless Application Model) is an extension of AWS CloudFormation, specifically designed for serverless application development and deployment, covering well-known serverless services like AWS Lambda, API Gateway and DynamoDB, among other AWS features. Level of abstraction AWS SAM is an application-level tool primarily focused on building and deploying serverless applications on AWS. It provides higher-level abstractions to facilitate the development and deployment of serverless applications, with a focus on the AWS services needed to support this type of architecture, i.e. the whole focus is on AWS and not another cloud. AWS SAM can generate the project's code locally and makes it possible to run tests, builds and deploys through the SAM CLI. How to install AWS SAM Go to this link and follow the steps according to each operating system. How to create a serverless project After installing, through a terminal, manage your project locally by generating the necessary files and then deploy the application. First, go to the folder where you want to generate your serverless resource and then open the terminal. Type the following command in the terminal to start SAM: sam init After typing it, a prompt will appear with some options for you to fill in your project information. Above we have 2 options to generate our initial template; let's type 1 to choose option 1 - AWS Quick Start Templates. After typing, a new list will be shown with some template options. Note that each option boils down to a resource such as a Lambda, a Dynamo table and even a simple API using API Gateway. For this scenario, let's create a Dynamo table; in this case, type option 13 and press enter. After typing, some questions will be asked; just type y to proceed until a new screen about the project information is offered, as below. Type the name of the project you want and press enter. 
In our case I typed the following name for the project, dynamo-table-using-aws-sam, as in the image below. After typing the project name, the template and files containing the base code will be available and ready for deployment. Access the folder and see that a file called template.yaml has been created containing information about the resources that will be created. It's very similar to a CloudFormation template, but shorter. Open the file and notice that several helper resources have been mapped into the template, such as Dynamo itself, a Lambda and an API Gateway. Some base code related to the Lambda and some unit tests that allow local invocations were also created. How to deploy Now that our template and base code have been generated, it's time to create the Dynamo table in AWS; just follow the next steps. Access the terminal again and type the following command: sam deploy --guided After executing this command, the following options will be shown in the terminal prompt for completion: For the Stack Name field, enter a value that will be the identifier of the stack, which will be used by CloudFormation to create the necessary resources. When in doubt, follow what was typed as per the image above, in this case dynamo-stack. After filling in all the fields, a summary of what will be created will be presented, as shown in the image below: Finally, one last question will be asked about whether you want to deploy; just type y to confirm. After confirming the operation, the progress of creating the resources will be displayed in the terminal until the end of the process. With the deploy finished, notice again the resources that were created. Now just access the AWS console and check the table created in Dynamo. Deleting Resources If necessary, you can delete the resources via the SAM CLI; just run the command below: sam delete dynamo-stack The dynamo-stack argument refers to the identifier we typed earlier in the Stack Name field, remember? 
Use the same value to delete the entire stack. After typing the command above, just confirm the next steps. That's how simple it is to create a serverless resource with AWS SAM; there are some advantages and disadvantages, and it all depends on your strategy. Hope you enjoyed!

  • Understanding the different Amazon S3 Storage Classes

What are Amazon S3 Storage Classes? Amazon S3 (Simple Storage Service) provides a strategic way to organize objects in different tiers, where each tier has particularities that we will detail later. The storage classes are characterized by offering different levels of durability, availability, performance and cost. Because of this, you must understand well which strategy to use to keep objects with the best cost-benefit. Next, we'll detail each class, describing its advantages and disadvantages. S3 Standard The S3 Standard storage class is the default and most widely used option for Amazon S3. It is designed to provide high durability, availability, and performance for frequently accessed objects. Advantages S3 Standard is the most common class used for storing and accessing objects frequently, as it is the tier that offers low latency, which allows it to be used for different use cases where dynamic access to objects is essential. Another advantage is the durability of 99.999999999%, which means that the chances of objects being corrupted or even lost are very low. As for availability, this class provides an SLA of 99.99%, which means that the objects are highly available for access. Disadvantages S3 Standard has some disadvantages compared to other classes. One of them is the high cost of storage for rarely accessed objects. That's why it's important to define lifecycle policies to deal with infrequently accessed objects. In this case, there is the S3 Standard-Infrequent Access class, which would be more appropriate for this context. We will talk about this class shortly. Another disadvantage is related to accessing newly created objects, even though this class has low latency as one of its main characteristics. 
Newly created objects may not be immediately available in all regions, and it may take time for objects to become available in some regions, causing high latency. S3 Intelligent-Tiering The S3 Intelligent-Tiering storage class provides a mechanism that automatically moves objects, based on usage patterns, to more suitable tiers, aiming at lower storage costs. Advantages The name itself says it all about one of the advantages of using S3 Intelligent-Tiering. This class is capable of managing objects based on usage patterns. So, for those objects that are rarely accessed, the class itself moves them to more suitable tiers aiming at lower storage costs. S3 Intelligent-Tiering automatically monitors and moves objects to the most suitable tiers according to the usage pattern; generally this works across three tiers: a tier optimized for frequently accessed objects, a tier optimized for infrequently accessed objects, which according to AWS generates savings of up to 40%, and a last tier targeted at rarely accessed objects, generating storage savings of around 68%. Another advantage is that there's no charge for data access when using S3 Intelligent-Tiering; it only charges for storage and transfer. Disadvantages A possible increase in latency for objects accessed for the first time. The reason is that when moving objects to more suitable tiers, there's the possibility of increased latency for those objects that are rarely accessed. S3 Standard-Infrequent Access (S3 Standard-IA) A suitable class for storing objects that are accessed less frequently but need to be available for quick access, keeping latency low. It is a typical class for storing long-term data. Advantages The storage cost is lower compared to the S3 Standard class, while maintaining the same durability characteristics. Regarding data availability, it has the same characteristics as the S3 Intelligent-Tiering class, with a 99.9% SLA. 
Also, it allows fast access to data by offering a high throughput rate. A minimum storage duration fee is charged monthly, unlike classes such as S3 Standard and S3 Intelligent-Tiering. Disadvantages Data access is charged per gigabyte retrieved. So, depending on the frequency of access and the volume accessed, it might be better to keep the data in a tier like S3 Standard. Everything depends on your strategy. S3 One Zone-Infrequent Access (S3 One Zone-IA) An ideal storage class for objects that are accessed infrequently and only need to be available in one Availability Zone. AWS itself suggests this class for secondary backup copies. Advantages The cost is lower compared to other storage classes, as the data is stored in only one zone, making it a low-cost option. Disadvantages Unlike other storage classes, where objects are stored in at least 3 Availability Zones (AZs), S3 One Zone-Infrequent Access makes data available in only 1 zone, meaning there is no redundancy. So there's a possibility of data loss if that zone fails. S3 Glacier Instant Retrieval S3 Glacier Instant Retrieval is part of the Glacier family, which features low-cost storage for rarely accessed objects. It's an ideal storage class for archiving data that still needs immediate access. Advantages Low storage costs. It has the same availability as the S3 Intelligent-Tiering and S3 Standard-IA classes. It provides redundancy, which means that the data is replicated to at least 3 Availability Zones (AZs). Disadvantages Although it offers immediate data retrieval while maintaining the same throughput as classes like S3 Standard and S3 Standard-IA, the cost becomes high when it's necessary to retrieve this data frequently in a short period. S3 Glacier Flexible Retrieval S3 Glacier Flexible Retrieval is the old storage class called simply S3 Glacier; it is designed to store objects with a long life duration, like any other class in the Glacier family. 
This class is ideal for objects that are accessed once or twice a year and can be retrieved asynchronously, without immediate access. Advantages This class is ideal for keeping objects that don't require immediate retrieval, making it a cost advantage. For data such as backups, where retrieval is very rare, this class avoids retrieval costs, since the frequency of access to this data is very close to zero. Disadvantages Retrieval time can be slow for some scenarios. As a characteristic of the class itself, S3 Glacier Flexible Retrieval may fall short when immediate access to data is required. S3 Glacier Deep Archive The lowest-cost storage class among the Glacier family. Ideal for storing data that is accessed once or twice a year. AWS suggests using this class for scenarios where we have to keep data for 8 to 10 years in order to comply with regulations related to compliance or any other rules related to long-term data retention. Advantages The lowest cost among classes in the same segment, with 99.99% availability. The class is available in at least 3 Availability Zones (AZs) and is ideal for data that requires long retention periods. Disadvantages Long retrieval times. So, if you need quick data retrieval, this SLA may not meet expectations, since the data is expected to be rarely accessed and the cost of retrieval can be higher depending on the frequency of access. Well, that's it, I hope you enjoyed it!

  • Creating a Spring Boot project with Intellij IDEA

Sometimes we have to create a new project for any reason: study, work or just a test. There are also a lot of tools that help us create one; in this tutorial I will show you how to create a new Spring Boot project directly from your IntelliJ IDE. For this tutorial I am using the latest version of IntelliJ IDEA (2023.1). Creating the project: The first step is creating a new project; go to: File > New > Project After that, you have to select Spring Initializr and fill in your project information. In this window you fill in: Name: the name of your project Location: the place where the project will be saved Language: the language of your project Type: the dependency management tool that will help us with the dependencies, Gradle or Maven Group: the name of the base packages Artifact: the name of the artifact Package name: the base package that will store your classes JDK: the local Java JDK you will use Packaging: the type of package the project will generate; for Spring Boot we use Jar When you click Next, you can choose all the dependencies of your project. If you'd like to create a Spring Boot REST API, look for the Spring Web dependency as in the following image. When you finish choosing all the dependencies, click Create to generate the project. At this point your project is created and the IDE will download all the dependencies and configurations. Now you can start coding! Links: IntelliJ IDEA - https://www.jetbrains.com/idea/

  • Database version control with Flyway and Spring boot

When we're working with microservices, one of the goals is to have self-contained applications. The database in general is one of the items we have to handle, and in a lot of cases it is managed outside of the application. One framework that allows us to version the database with migrations is Flyway (https://flywaydb.org/). Flyway helps us bring all the database changes into the Spring Boot project through SQL scripts and some metadata that tracks all the changes to the database. The advantage of this method is that anyone with the project will have the same state of the DB, in general a copy of the development or production database. In this article I'll show how to configure Flyway in a Spring Boot project. Creating the project To create the project, we'll use the official Spring site to set up a new project. First access the website: https://start.spring.io/ When the website is open, you can set the configurations like the following image: When it's done, you can click on the GENERATE button to download the configured project. After that, you can import it into your IDE. I will use IntelliJ IDEA (https://www.jetbrains.com/idea/). Understanding Flyway If you open the file pom.xml, you will see the flyway-core dependency like this: In the project structure you will see the folder db.migration; we will save all the SQL files inside this folder. When we start up the project, one of the tasks is to check whether any new script was included in the project; if there is a new one, the project will run it against the database. To create a new script, we have to follow a naming pattern for the file. The pattern includes a number that is incremented to help Flyway determine the sequence of migration execution. 
For this tutorial we will create a script like the following example, using V1, V2, V3 to increment the new files: V1__create_base_tables.sql Creating the first file Create the new file called V1__create_base_tables.sql in the db.migration folder, following the script below: Configuring the database To simplify this tutorial I will use the H2 database (an in-memory DB) to show how Flyway works. We need to set up the project with the H2 parameters. In the pom.xml file add the following dependency: And next we will set the connection settings in the project; in the application.properties file add the following settings: After running, you'll see logs similar to these on the console: 2023-04-07 14:12:29.896 INFO 8012 --- [ main] o.f.c.i.database.base.BaseDatabaseType : Database: jdbc:h2:mem:testdb (H2 2.1) 2023-04-07 14:12:30.039 INFO 8012 --- [ main] o.f.core.internal.command.DbValidate : Successfully validated 1 migration (execution time 00:00.037s) 2023-04-07 14:12:30.055 INFO 8012 --- [ main] o.f.c.i.s.JdbcTableSchemaHistory : Creating Schema History table "PUBLIC"."flyway_schema_history" ... 2023-04-07 14:12:30.132 INFO 8012 --- [ main] o.f.core.internal.command.DbMigrate : Current version of schema "PUBLIC": << Empty Schema >> 2023-04-07 14:12:30.143 INFO 8012 --- [ main] o.f.core.internal.command.DbMigrate : Migrating schema "PUBLIC" to version "1 - create base tables" 2023-04-07 14:12:30.177 INFO 8012 --- [ main] o.f.core.internal.command.DbMigrate : Successfully applied 1 migration to schema "PUBLIC", now at version v1 (execution time 00:00.057s) 2023-04-07 14:12:30.477 INFO 8012 --- [ main] o.hibernate.jpa.internal.util.LogHelper : HHH000204: Processing PersistenceUnitInfo [name: default] When we insert a new script, like V2__new_tables.sql, Flyway will execute only the new script. Consideration: in this case we're using an in-memory database, so when the application stops all data will be lost. 
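As an illustration, the H2 settings for application.properties might look something like the snippet below; the exact property values are assumptions, chosen to match the jdbc:h2:mem:testdb URL that appears in the logs above:

```properties
# H2 in-memory datasource (values assumed for this example)
spring.datasource.url=jdbc:h2:mem:testdb
spring.datasource.driverClassName=org.h2.Driver
spring.datasource.username=sa
spring.datasource.password=

# Optional: enable the H2 web console at /h2-console
spring.h2.console.enabled=true
```

With flyway-core on the classpath, Spring Boot auto-configures Flyway against this datasource, so no extra Flyway-specific properties are strictly required for this setup.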
When we start it again with the second script, Flyway will initialize the database again, running all the scripts. In future posts I will cover a real database and explore some use cases. Conclusion Versioning the database from the project gives us some advantages, like giving all the developers a mirror of the database. When there are modifications, the application will handle them and apply them to the development or production environment. References For more details you can see the official Spring documentation: https://docs.spring.io/spring-boot/docs/current/reference/html/howto.html#howto.data-initialization.migration-tool.flyway Creating migrations: https://flywaydb.org/documentation/tutorials/baselineMigrations H2 database: http://www.h2database.com/html/tutorial.html

  • Tutorial: Kinesis Firehose Data Transformation with Terraform and Java

Introduction AWS provides different ways to transform data through its services, and one of my favorites is Kinesis Firehose Data Transformation. This is a strategy for transforming data by leveraging the stream service to deliver data to a destination. For this tutorial, we're going to use the strategy below. Kinesis Firehose will send data and, instead of writing it to the S3 bucket, it will invoke a Lambda to transform that data and then send it back to Kinesis Firehose, which will deliver the same data to S3. Creating the project For this post we'll use Java as the language and Maven as the dependency manager. Therefore, it's necessary to generate a Maven project that will create the structure of our project. If you don't know how to generate a Maven project, I recommend seeing this post where I show how to generate one. Project structure After generating the Maven project and importing it into your IDE, we're going to create the same files and packages shown on the side, except for the pom.xml that was created by the Maven generator. Inside the java/ folder, create a package called coffee.tips.lambda and also create a Java class called Handler.java inside this same package. Now, create a package called model inside coffee.tips, then create two Java classes: Record.java Records.java Lastly, create a new package called status and also create an enum called Status. Creating the Record Class Why do we need to create the Record class? Kinesis Firehose expects an object as a return value containing the fields above. This happens when Kinesis Firehose invokes the Lambda to transform data, and that Lambda must return an object containing these filled fields. recordId This field value must contain the same ID as the incoming Kinesis record. result This field value controls the transformation status result. The possible values are: Ok: Record successfully transformed. Dropped: Record dropped intentionally according to your processing logic. ProcessingFailed: Data could not be transformed. 
data The transformed data payload, after the data is encoded to Base64. This model must contain these parameters; otherwise, Kinesis Firehose rejects it and treats it as a data transformation failure. Creating the Records Class The Records class will be our response Java class, containing a list of the Record type. Creating the Status Enum I decided to create the Enum above just to keep the code elegant, but it's useful when we need to map different values for a specific context. This Enum will be used in our logic to transform data. Creating the Handler Class The Handler class will be our controller for the Lambda. This Lambda will be invoked by Kinesis Firehose, passing some parameters containing the data to be transformed. Note that, for the handleRequest method, a parameter called input of the KinesisFirehoseEvent type contains the records sent by Kinesis Firehose, and the same method returns an object of the Records type containing a list of records that will later be sent back to Kinesis Firehose, which delivers them to S3. Within the iteration using Java Streams, we create some conditions just to explore how the result field works. Depending on the condition, we set the result value to Dropped, which means the data won't be delivered to Kinesis Firehose. Otherwise, for those that were set to Ok, the data will be sent to Kinesis Firehose. Another detail is that you can change values during execution. We set "TECH" as the value for the TICKER_SYMBOL field when the SECTOR value is TECHNOLOGY. It's a way to transform data. Finally, two other methods were created just to decode and encode the data, as a requirement for the processing to work well. Updating pom.xml After generating our project via Maven, we need to add some dependencies and a Maven plugin to package the code and libraries for deployment. Following is the pom.xml content below. Creating resources with Terraform Instead of creating the Kinesis Firehose, Lambda, policies and roles manually via the console, we're going to create them via Terraform. 
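Before moving on to the Terraform files, the decode/encode helpers mentioned above can be sketched with the JDK's Base64 API. The method names and the sample payload here are my assumptions, not necessarily the exact code from the project; the point is that Firehose delivers record data Base64-encoded and expects the transformed payload Base64-encoded again:

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

// Sketch of the decode/encode helpers used during transformation.
public class PayloadCodec {

    // Firehose record data arrives Base64-encoded; decode it to a JSON string.
    static String decode(String base64Data) {
        return new String(Base64.getDecoder().decode(base64Data), StandardCharsets.UTF_8);
    }

    // The transformed payload must be re-encoded before being returned.
    static String encode(String json) {
        return Base64.getEncoder().encodeToString(json.getBytes(StandardCharsets.UTF_8));
    }

    public static void main(String[] args) {
        // Hypothetical payload resembling the TICKER_SYMBOL/SECTOR example above.
        String original = "{\"TICKER_SYMBOL\":\"TECH\",\"SECTOR\":\"TECHNOLOGY\"}";
        String encoded = encode(original);
        System.out.println(encoded);
        System.out.println(decode(encoded)); // round-trips back to the original JSON
    }
}
```

Forgetting either step is a common cause of ProcessingFailed results, since Firehose cannot interpret a payload that is not valid Base64.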
If you don't know much about Terraform, I recommend seeing this tutorial Getting started using Terraform on AWS. Inside the terraform folder, create the following files: vars.tf content The vars.tf file is where we declare the variables. Variables provide flexibility when we need to work with different resources. vars.tfvars content Now we need to set the values of these variables. So, let's create a folder called /development inside the terraform folder. After creating the folder, create a file called vars.tfvars like the side image and paste the content below. Note that for the bucket field, you must specify the name of your own bucket. The bucket's name must be unique. main.tf content For this file, we just declare the provider. The provider is the cloud service we're going to use to create our resources. In this case, we're using AWS as the provider, and Terraform will download the necessary packages to create the resources. Note that for the region field, we're using the var keyword to assign the region value already declared in the vars.tfvars file. s3.tf content This file is where we declare resources related to S3. In this case, we only create the S3 bucket. But if you want to create more S3-related features like policies, roles, etc., you can declare them here. lambda.tf content The content below will be responsible for creating the AWS Lambda and its roles and policies. Note that in the same file we created a resource called aws_s3_object. It's a strategy to upload the Jar file directly to S3 after packaging. Keeping some files on S3 is a smart way to handle large files. Understanding lambda.tf content 1. We declared aws_iam_policy_document data sources that describe what actions the resources assigned to these policies can perform. 2. The aws_iam_role resource provides the IAM role and will control some of the Lambda's actions. 3. We declared the aws_s3_object resource because we want to store our jar file on S3. 
So, during the deploy phase, Terraform will take the JAR file created in the target folder and upload it to S3.

depends_on: Terraform must create this resource before the current one.
bucket: The name of the bucket where the JAR file will be stored.
key: The JAR file's name.
source: The source file's location.
etag: Triggers updates when the value changes.

4. aws_lambda_function is the resource responsible for creating the Lambda, and we need to fill in some fields such as:

function_name: The Lambda's name.
role: The Lambda role declared in the previous steps, which provides access to AWS services and resources.
handler: In this field you need to pass the main class's path.
source_code_hash: This field is responsible for triggering Lambda updates.
s3_bucket: The name of the bucket where the JAR file generated during deployment will also be stored.
s3_key: The JAR file's name.
runtime: Here you set the programming language supported by Lambda; for this example, java11.
timeout: The Lambda's execution timeout.

5. aws_iam_policy provides the IAM policies for the resources, where we define the actions to be performed. In this case, we define actions such as Lambda invocation and CloudWatch logging.
6. With the aws_iam_role_policy_attachment resource, we attach IAM policies to IAM roles. In this case, we attach the lambda_iam_role and lambda_policies created previously.
7. Finally, we have the aws_lambda_permission resource, which gives Kinesis Firehose permission to invoke the Lambda.

kinesis.tf content

Understanding the kinesis.tf content

1. We declared the aws_kinesis_firehose_delivery_stream resource and its fields. Here are the details:

destination: The destination itself. Kinesis Firehose can deliver data to S3 (extended_s3), Redshift, Elasticsearch (AWS's OpenSearch Service), Splunk and an HTTP endpoint.
name: The Kinesis Firehose stream's name.
depends_on: The Kinesis Firehose stream will only be created if the S3 bucket already exists.

extended_s3_configuration:
1. bucket_arn: The S3 bucket, referenced by its ARN.
2. role_arn: The role's ARN.
3.
prefix: The S3 bucket folder where the data will be stored. You can build a time-based path using expressions such as "/year=!{timestamp:yyyy}/month=!{timestamp:MM}/".
4. error_output_prefix: In this field, you can define a path to store failed processing results.
5. buffer_interval: The interval after which Kinesis Firehose delivers buffered data.
6. buffer_size: The buffer size that triggers Kinesis Firehose to deliver data. Kinesis Firehose supports both buffering options, delivering when either threshold is reached.
7. compression_format: There are several compression format options, such as ZIP, Snappy, HADOOP_SNAPPY and GZIP. For this tutorial, we chose GZIP.

processing_configuration: This is the block where we define which resource will process the data; in this case, AWS Lambda.
1. enabled: true to enable, false to disable.
2. type: The processor's type.
3. parameter_value: The Lambda function's ARN.

2. We declared aws_iam_policy_document data sources that describe what actions the resources assigned to these policies can perform.
3. The aws_iam_role resource provides the IAM role that controls some of the Kinesis actions.
4. aws_iam_policy provides the IAM policies for the resources, where we define the actions to be performed. In this case, we define S3 actions and some Lambda actions.
5. With the aws_iam_role_policy_attachment resource, we attach IAM policies to IAM roles. In this case, we attach the firehose_iam_role and firehose_policies created previously.

Packaging

We've created our Maven project, the Handler class in Java, and the Terraform files that create our resources on AWS. Now, let's run the following commands to deploy the project.

First, open a terminal, make sure you're in the project's root directory, and run the following Maven command:

mvn package

The command above packages the project, creating the JAR file to be deployed and uploaded to S3. To verify, check the target folder and note that several files were created, including the lambda-kinesis-transform-1.0.jar file.
Running Terraform

Now, let's run some Terraform commands. Inside the terraform folder, run the following commands in the terminal:

terraform init

The command above initializes Terraform, downloading the Terraform libraries and validating the Terraform files.

Next, let's run the plan command to check which resources will be created:

terraform plan -var-file=development/vars.tfvars

After running it, you'll see logs similar to these on the console:

Finally, we can apply the changes and create the resources with the following command:

terraform apply -var-file=development/vars.tfvars

After running it, you must confirm the actions by typing "yes". Now the provisioning is complete!

Sending messages

Now we need to send some messages to be transformed, and we're going to send them via the Kinesis Firehose console. There are of course other ways to send data, but for this tutorial we'll use the easiest one.

Open the Kinesis Firehose console and access the Delivery streams option as shown in the image below. In the "Test with demo data" section, click the "Start sending demo data" button. After clicking, messages will be sent through Kinesis Firehose and, according to the buffer settings, Kinesis will take 2 minutes to deliver the data, or less if the data reaches 1 MiB first.

Let's take a look at our Lambda's metrics: click the Monitor tab, then the Metrics option, and note that the Lambda has been invoked and there are no errors.

Transformed data results

Now that we know everything is working, let's look at the transformed data directly on Amazon S3. Access the S3 bucket that was created and note that many files were created. Let's read one of them and see the transformed data.

Choose a file, as in the image below, click the Actions button and then the "Query with S3 Select" option. Following the options selected in the image below, click the "Run SQL query" button to see the result.
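The query result reflects the rules we implemented earlier in the Handler. The helper below is an illustrative sketch of that decision logic, not the actual code from Handler.java; the field names come from the Kinesis demo data:

```java
public class TransformRules {

    // Applies the rules described earlier: records with CHANGE < 0 are
    // marked as Dropped (returned as null here), and records whose SECTOR
    // is TECHNOLOGY get their TICKER_SYMBOL rewritten to "TECH".
    public static String applyRules(double change, String sector, String tickerSymbol) {
        if (change < 0) {
            return null; // record dropped: never delivered to S3
        }
        if ("TECHNOLOGY".equals(sector)) {
            return "TECH"; // value rewritten during transformation
        }
        return tickerSymbol; // record delivered unchanged
    }
}
```

So the files queried on S3 should contain no records with a negative CHANGE value, and every TECHNOLOGY record should carry the TICKER_SYMBOL value "TECH".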
The image above shows that, following the algorithm we defined in Handler.java, records with a CHANGE field value less than zero were dropped, and those with a SECTOR field value equal to TECHNOLOGY had their TICKER_SYMBOL field value set to TECH.

This was an example of how you can transform data using Kinesis Firehose Data Transformation, with Lambda as an inexpensive component for transforming data.

Stop sending messages

To save money, you can stop sending messages before destroying the created resources via Terraform. Just go back to the Kinesis Firehose console and click the "Stop sending demo data" button.

Destroy

AWS billing charges will accrue if you don't destroy these resources, so I recommend destroying them to avoid unnecessary charges. To do so, run the command below:

terraform destroy -var-file=development/vars.tfvars

Remember that you need to confirm this operation, cool?

Conclusion

Kinesis Firehose definitely isn't just a service for delivering data. It offers flexibility in integrating AWS services and the possibility of delivering data to different destinations, transforming it and applying logic according to your use case.

Github repository

Books to study and read

If you want to learn more and reach a high level of knowledge, I strongly recommend reading the following book(s):

AWS Cookbook is a practical guide containing 70 familiar recipes about AWS resources and how to solve different challenges. It's a well-written, easy-to-understand book covering key AWS services through practical examples.

AWS, or Amazon Web Services, is the most widely used cloud service in the world today. If you want to understand more about the subject to be well positioned in the market, I strongly recommend studying it.

Setup recommendations

If you're interested in knowing the setup I use to develop my tutorials, here it is:

Notebook Dell Inspiron 15 15.6
Monitor LG Ultrawide 29WL500-29

Well, that's it! I hope you enjoyed it!
