- How Artificial Intelligence Can Help Data Engineers Build Data Pipelines
Building and maintaining data pipelines is a critical task for data engineers, but it can be time-consuming and prone to human error. With the help of artificial intelligence (AI), this process can be accelerated, errors reduced, and efficiency increased. In this article, we'll explore how AI is transforming data pipeline automation, with practical examples of prompts for engineers.

How AI Can Help Data Engineers Automate Data Pipelines

Automating data pipelines with AI encompasses multiple steps, including data collection, transformation, validation, and loading. Some of the main applications of AI include:

- Automated code creation: AI can generate SQL, Python, or Scala scripts based on simple textual descriptions.
- Fault identification: AI-powered tools can detect and suggest fixes for performance bottlenecks or inconsistencies.
- Resource optimization: infrastructure configurations can be automatically adjusted to improve efficiency and reduce costs.
- Intelligent monitoring: AI algorithms can predict faults and anomalies before they cause significant problems.
- Technical documentation: AI can create detailed and organized documentation for complex pipelines.

Using AI to automate data pipelines not only makes engineers' jobs easier, but also helps companies scale their solutions faster and with better quality.

Specific Areas Where AI Can Help

Pipeline Planning and Modeling

During planning, AI can suggest optimal architectures for a pipeline based on data volume, frequency of updates, and required integrations.

Example prompt: "Design a pipeline architecture that processes 1 TB of data daily, integrating data from MySQL, applying transformations in Spark, and loading into Redshift."
Expected result: a suggested architecture with the following components:

- MySQL as source: use a connector like Debezium or AWS Database Migration Service (DMS) to capture incremental changes (CDC) and avoid extracting large, repeated volumes daily. Alternatively, use a full extract for smaller reference tables and incremental extraction for transactional tables.
- Spark for distributed processing: AWS EMR or Databricks can run the Spark transformation jobs. Split Spark jobs into cleaning jobs (normalization, handling null values, formatting fields) and transformation jobs (application of business rules, aggregations, and joins). Use PySpark or Scala and adopt a DAG (Directed Acyclic Graph)-based model to orchestrate dependencies.
- Intelligent partitioning: data should be partitioned strategically (e.g., by date) to speed up loads into Redshift.
- Redshift for storage and query: data transformed by Spark is written to Redshift using the COPY command (bulk load of optimized files, such as Parquet or compressed CSV, from S3 into Redshift) and staging tables (load data into temporary tables, then run SQL commands to merge with the final tables). Enable a SortKey and DistKey in Redshift to optimize subsequent queries.

Task-Specific Code Generation

AI can generate code snippets for common tasks like data transformation and API integration.

Example prompt: "Create a Python script that extracts data from a REST API, transforms the JSON into a DataFrame, and saves the results to an S3 bucket."
```python
import requests
import pandas as pd
import boto3

# Data extraction
url = "https://api.example.com/data"
response = requests.get(url)
data = response.json()

# DataFrame transformation
df = pd.DataFrame(data)

# Saving to S3
s3 = boto3.client('s3')
df.to_csv('/tmp/data.csv', index=False)
s3.upload_file('/tmp/data.csv', 'my-bucket', 'data/data.csv')
```

Data Validation and Quality

AI can suggest automated checks to validate data consistency and quality, including detecting outliers and missing values.

Example prompt: "Create a Python script to check a dataset for duplicates and identify columns with missing values."

```python
# Checking duplicates
duplicates = df[df.duplicated()]
if not duplicates.empty:
    print(f"There are {len(duplicates)} duplicates in the dataset.")

# Checking missing values
missing = df.isnull().sum()
print("Columns with missing values:")
print(missing[missing > 0])
```

Query Optimization and Processing

AI can review SQL queries and suggest improvements to optimize performance, such as indexes or schema changes.

Example prompt: "Analyze this SQL query and suggest how to reduce its execution time."

Example query:

```sql
SELECT * FROM sales WHERE sales_date > '2024-01-01' ORDER BY total DESC;
```

AI suggestion:

- Add an index on the sales_date column.
- Replace SELECT * with specific columns.
- Consider partitioning the table by sales_date.

Automated Monitoring and Alerts

AI algorithms can configure pipelines to automatically issue alerts if they detect failures, delays, or anomalies.

Example prompt: "Create an alert in Python to monitor the execution time of a pipeline and send a notification via Slack if it exceeds 30 minutes."

```python
import time
from slack_sdk import WebClient

start_time = time.time()

# Pipeline execution
# (pipeline_code)

# Checking execution time
execution_time = time.time() - start_time
if execution_time > 1800:
    client = WebClient(token='your-slack-token')
    client.chat_postMessage(
        channel='#alerts',
        text=f"Pipeline took {execution_time/60:.2f} minutes to run."
    )
```

Documentation and Reports

AI can create detailed documentation of each step of the pipeline, including flows, configurations, and expected outcomes.

Example prompt: "Document a pipeline that consumes data from a legacy system, transforms it in Airflow, and stores it in a Snowflake cluster."

Expected output (documentation in Markdown format):

### Data Pipeline: Legacy System to Snowflake

#### General Description
Pipeline created to consume data from a legacy system, transform it, and store it in Snowflake.

#### Main Components
1. **Extraction**:
   - Tool: custom connector via Python.
   - Frequency: daily.
2. **Transformation**:
   - Framework: Apache Airflow.
   - Operations: normalization of columns and filters based on business rules.
3. **Storage**:
   - Destination: Snowflake.
   - Tables: `processed_data`.

#### Monitoring
- Email alerts for failures.
- Logs stored in CloudWatch.

Benefits of AI-Powered Data Pipeline Automation

- Efficiency: reduces the time required to develop and maintain pipelines.
- Consistency: ensures that processes follow high-quality standards.
- Flexibility: allows for rapid adaptation to changing requirements.
- Economy: reduces operational costs by optimizing resources.

Conclusion

In this post, we showed how Artificial Intelligence can help data engineers automate data pipelines, bringing more efficiency and practicality to their daily work. With practical examples and accessible tools, it is possible to make the most of this technology and create more reliable and scalable pipelines. The examples above may seem simple, but they only hint at the many possibilities that AI has to offer.
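As a closing illustration of the partitioning advice from the architecture section above, here is a minimal sketch in Python of a helper that builds date-partitioned S3 keys for daily loads. The prefix, table name, and Hive-style layout are illustrative assumptions, not part of any AWS API:

```python
from datetime import date


def partitioned_key(prefix: str, table: str, run_date: date) -> str:
    """Build a Hive-style, date-partitioned S3 key for a daily load."""
    return (
        f"{prefix}/{table}/"
        f"year={run_date.year}/month={run_date.month:02d}/day={run_date.day:02d}/"
        f"{table}.parquet"
    )


print(partitioned_key("processed", "sales", date(2024, 1, 15)))
# processed/sales/year=2024/month=01/day=15/sales.parquet
```

Keys laid out this way let a Redshift COPY (or a Spark reader) target a single day's data instead of scanning the whole prefix.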
- Top Programming Languages in 2025: A Complete and Updated Overview!
Top Programming Languages in 2025

The top programming languages are constantly evolving, and it's crucial to stay up to date on what's trending and what's most relevant for the future. Some significant changes are expected in 2025, and it's important to understand the big picture.

In this article, we'll provide a comprehensive and up-to-date overview of the top programming languages in 2025. We'll explore which languages are gaining traction, which are maintaining their relevance, and which are losing ground. By understanding market trends and developer needs, you'll be able to make informed decisions about which languages to invest in and master. We'll also cover the strengths and weaknesses of each language, as well as career opportunities and the industries that use them most frequently. Be prepared for the future of programming with this comprehensive and up-to-date overview of the top programming languages in 2025.

Introduction to the main programming languages in 2025

In 2025, the programming world will continue to expand, reflecting technological changes and market demands. Programming languages are the foundation of software, application, and system development, and their importance cannot be overstated. As new technologies emerge, some languages will rise to prominence while others will struggle to maintain their relevance. Understanding which languages are on the rise and which are fading is crucial for any developer looking to stay competitive.

The programming landscape is dynamic and ever-changing. With the rise of automation, artificial intelligence, and mobile app development, certain languages have become indispensable. Furthermore, the popularity of a language can vary by region, industry, and developer preference. Therefore, it is crucial to be aware of the global and regional trends that are shaping the future of programming. In this article, we will take an in-depth look at the top programming languages of 2025.
We will look not only at the most popular languages, but also at the trends shaping their use and evolution. In doing so, we hope to provide a comprehensive overview that helps developers, students, and professionals make informed decisions about their programming path.

Popular programming languages today

There are currently several programming languages that dominate the market, each with its own unique characteristics and application areas. Python, JavaScript, Java, C++, Ruby, and C# are among the most widely used by developers worldwide. Each of these languages has an active community and a wide range of libraries and frameworks that facilitate development. This contributes to their being chosen for a variety of projects, from web applications to artificial intelligence systems.

Python, for example, continues to be a popular choice due to its simplicity and versatility. It is widely used in data science, machine learning, and automation, making it an essential tool for developers and analysts. JavaScript, on the other hand, is the backbone of web development, allowing the creation of interactive and dynamic interfaces. With the rise of frameworks such as React and Angular, JavaScript has solidified its position as one of the most sought-after languages.

Java and C++ also maintain their relevance, especially in sectors such as enterprise software development and embedded systems. Java is known for its portability and robustness, while C++ is valued for its performance and control over system resources. Ruby and C# have their own loyal fan bases, each offering features that make them ideal for web and application development, respectively. The choice of language can depend on factors such as the type of project, the development team, and the specific needs of the client.

Programming language trends for the future

As we move towards 2025, there are a few trends that can be observed in the use of programming languages.
One of the main trends is the increasing demand for languages that support artificial intelligence and machine learning. Python stands out in this scenario, but other languages such as R and Julia are also gaining popularity due to their ability to manipulate large volumes of data and perform complex analyses.

Another important trend is the increasing adoption of programming languages that facilitate rapid and efficient development. With the need to bring products to market quickly, there is increasing pressure to use languages that allow for rapid prototyping and continuous iteration. This has led to an increase in the use of languages such as JavaScript and Ruby, which have robust frameworks that speed up the development process.

Additionally, functional programming is becoming more prevalent, influencing languages such as JavaScript and Python. Functional programming offers a way to write cleaner, less error-prone code, which is especially valuable in large-scale projects. The rise of microservices-oriented architectures is also driving the use of languages that support this paradigm, with a focus on scalability and maintainability. As the technology landscape continues to evolve, it's vital that developers stay informed about these trends in order to adapt and thrive.

Python

Python continues to be one of the most popular programming languages in 2025, cementing its position as the language of choice for many developers. Its simplicity and readability make it accessible to beginners, while its powerful libraries and frameworks make it a preferred choice for advanced applications. The Python community is extremely active, contributing a wide range of resources that make learning and development easier. One of the areas where Python shines is data science and machine learning. Libraries such as Pandas, NumPy, and TensorFlow provide robust tools for data analysis and building predictive models.
With the growing importance of data analysis in various industries, the demand for developers who are proficient in Python is on the rise. In addition, Python is frequently used in task automation, DevOps, and web development, further increasing its practical applications in the market.

However, Python is not without its challenges. Although it is a high-level language with a clear syntax, its performance can be inferior compared to languages such as C++ or Java in applications that require high performance. Additionally, managing dependencies and virtual environments can be tricky for new users. Despite this, widespread adoption and continued community support ensure that Python will remain a relevant and growing language for years to come.

JavaScript

JavaScript is undoubtedly one of the most influential languages in the world of programming, especially in web development. In 2025, its relevance remains strong, with a vibrant community and a plethora of tools and libraries that are transforming the way developers build applications. With the growing demand for rich and interactive user experiences, JavaScript has become a central part of any web development project.

The evolution of JavaScript has been fueled by the emergence of frameworks such as React, Angular, and Vue.js, which have improved development efficiency and enabled the creation of single-page applications (SPAs) with exceptional performance. These frameworks help structure code in a more organized way and make it easier to maintain large projects. In addition, the popularity of Node.js has allowed developers to use JavaScript on both the front end and the back end, creating a unified development experience.

However, the JavaScript ecosystem also faces some challenges. The rapid evolution of libraries and frameworks can be overwhelming for new developers, who may feel lost amid so many options.
Additionally, cross-browser compatibility issues and the need for performance optimization are ongoing concerns. Despite these challenges, JavaScript's flexibility and ubiquity ensure that it remains one of the most important and sought-after languages in the job market.

Java

Java continues to be one of the most trusted and widely used programming languages in 2025. Known for its portability and robustness, Java is a popular choice for developing enterprise applications, backend systems, and Android apps. Its "write once, run anywhere" philosophy appeals to companies looking for scalable, long-term solutions.

One of Java's key features is its strong typing and object orientation, which help create more structured and maintainable code. In addition, its vast ecosystem of libraries and frameworks, such as Spring and Hibernate, promotes more agile and efficient development. Java is also a popular choice in high-demand environments, such as banking and financial institutions, where security and reliability are paramount.

However, Java is not without its drawbacks. The verbosity of the language can be a hindrance for new developers, who may find the syntax more complex compared to languages like Python or JavaScript. Additionally, with the rise in popularity of lighter, microservice-oriented stacks such as Go and Node.js, Java has faced some competition. Even so, its solid reputation and continued evolution through updates and new versions ensure that Java will remain a relevant choice for developers in 2025.

C++

C++ is a language that remains relevant in 2025, especially in areas that require control over system resources and high performance. Commonly used in the development of embedded systems, games, and applications that require intensive processing, C++ continues to be a favorite choice for developers who need efficiency and speed.
The language allows for low-level programming, which is crucial for applications that require direct interaction with the hardware. One of the advantages of C++ is its memory manipulation capability, which provides superior performance compared to many other languages. In addition, C++'s object-oriented programming allows for the creation of modular and reusable code, making it easier to maintain and develop complex systems. The language also has a strong community and user base that continues to contribute new libraries and tools.

However, C++ presents significant challenges. The complexity of the language can be intimidating for beginners, and manual memory management can lead to hard-to-debug errors. In addition, competition from languages like Rust, which offer memory safety and simplicity, is beginning to challenge C++'s position in some areas. Despite these challenges, the demand for proficient C++ developers continues to be strong, especially in industries where performance is critical.

Ruby

Ruby, while not as popular as some other languages, maintains a loyal user base and a niche in web development. As of 2025, Ruby continues to be the language of choice for many developers working with the Ruby on Rails framework, a powerful tool for rapid web application development. Ruby's "convention over configuration" philosophy simplifies the coding process, making it attractive to startups and agile projects.

The elegance and readability of Ruby code are often cited as some of its greatest strengths. The language encourages good coding practices and allows developers to write clear, concise code. Additionally, the Ruby community is known for its camaraderie and support, offering numerous resources, gems, and tutorials to help new users get started with the language.

However, Ruby does face challenges when it comes to performance. Compared to languages like Java or C++, Ruby can be slower, which can be a disadvantage in performance-intensive applications.
Additionally, Ruby's popularity has waned in some areas, with developers opting for other languages that offer better performance or more support for new technologies. Despite this, Ruby is still an excellent choice for web development, especially for those looking for an easy-to-learn language with a supportive community.

C#

C# is a programming language developed by Microsoft that continues to gain prominence in 2025, especially in application development for the .NET platform. C# is widely used in game development, desktop applications, and enterprise solutions, making it a versatile choice for developers. The language combines the robustness of C++ with the ease of use of languages such as Java, providing a balance between performance and productivity.

One of the main advantages of C# is its integration with the Microsoft ecosystem, which makes it easier to build applications that use technologies such as Azure and Windows. In addition, the language has a rich set of libraries and frameworks that accelerate development and allow the creation of modern and scalable applications. The introduction of .NET Core has also expanded the usability of C#, allowing developers to create cross-platform applications.

However, C# is not without its challenges. Its dependence on the Microsoft platform may be seen as a limitation by some developers, especially those who prefer open-source solutions. In addition, the market can be more competitive, with many companies looking for developers with experience in popular languages such as JavaScript or Python. Despite these obstacles, the growing adoption of C# in sectors such as gaming and enterprise development ensures that the language remains a viable and relevant choice.

Conclusion: Choosing the Right Programming Language for the Future

Choosing the right programming language for the future is a crucial decision for developers and technology professionals.
In 2025, a variety of languages will continue to evolve, each with its own unique features, advantages, and disadvantages. Understanding these nuances is essential to making informed decisions about your learning and development choices.

When considering the future, it's important to weigh not only the popularity of a language, but also its applications and market demand. Languages like Python and JavaScript are becoming increasingly essential, especially in areas involving data science and web development. However, languages like Java, C++, and C# also remain relevant in specific industries that require performance and security.

Finally, the most important thing is to be willing to learn and adapt. The world of programming is constantly evolving, and new languages and technologies emerge regularly. The ability to learn new languages and adapt to different development environments will be a key differentiator in the future. So choose a language that not only meets your current needs, but also opens doors to new opportunities and challenges as you advance in your programming career. Are you ready to develop your skills and stand out in 2025?
- Getting started using Terraform on AWS
Terraform is an IaC (Infrastructure as Code) tool that makes it possible to provision infrastructure in cloud services. Instead of manually creating resources in the cloud, Terraform facilitates the creation and control of these services through state management, in a few lines of code. Terraform has its own configuration language and can be used alongside other languages, isolating the business layer from the infrastructure layer. For this tutorial, we will create an S3 bucket and an SQS queue through Terraform on AWS.

Terraform Installation

For installation, download the installer from this link according to your operating system.

AWS Provider

We'll use AWS as the provider. When we select AWS as the provider, Terraform downloads the packages that enable the creation of AWS-specific resources. To follow the next steps, we assume you already have:

- AWS credentials
- A user with the necessary permissions to create resources on AWS

Authentication

As we are using AWS as the provider, we need to configure Terraform to authenticate before creating the resources. There are a few ways to authenticate. For this tutorial, I chose to use one of the AWS mechanisms that allows you to keep credentials in a file in the $HOME/.aws folder and use it as a single authentication source. To create this folder with the credentials, we need to install the AWS CLI; access this link and follow the installation steps. This mechanism avoids using credentials directly in the code, so if you need to run a command or SDK that connects to AWS locally, these credentials will be loaded from this file.
Credentials settings

After installing the AWS CLI, open the terminal and run the following command:

```shell
aws configure
```

In the terminal itself, fill in the fields using your user's credential data. After filling them in, two text files will be created in the $HOME/.aws directory:

- config: contains the profiles; in this case the default profile was created
- credentials: contains the credentials themselves

Let's change the files to suit this tutorial. Change the config file as below:

```ini
[profile staging]
output = json
region = us-east-1

[default]
output = json
region = us-east-1
```

In this case, we have two profiles configured: the default and staging profiles. Change the credentials file as below, replacing the placeholders with your credentials:

```ini
[staging]
aws_access_key_id = [Access key ID]
aws_secret_access_key = [Secret access key]

[default]
aws_access_key_id = [Access key ID]
aws_secret_access_key = [Secret access key]
```

Creating Terraform files

After all these configurations, we will actually start working with Terraform. For this we need to create some base files that will help us create resources on AWS.

Step 1: In the root directory of your project, create a folder called terraform/

Step 2: Inside the terraform/ folder, create the files main.tf and vars.tf

Step 3: Create another folder called staging/ inside terraform/

Step 4: Inside the terraform/staging/ folder, create the file vars.tfvars

Okay, now we have the folder structure that we will use for the next steps.

Setting up Terraform files

Let's start by declaring the variables using the vars.tf file.

vars.tf

This file is where we create the variables to be used by resources, bringing better flexibility to our code. We can create variables with a default value or simply empty; empty variables are filled in according to the execution environment, which will be explained later.
```hcl
variable "region" {
  default = "us-east-1"
  type    = string
}

variable "environment" {}
```

We created two variables:

- region: a string variable whose default value is the AWS region in which we are going to create the resources, in this case us-east-1.
- environment: a variable that will represent the execution environment.

staging/vars.tfvars

In this file we define the value of the environment variable, which was created with no default value:

```hcl
environment = "staging"
```

This strategy is useful when we have more than one environment. For example, if we had a production environment, we could create another vars.tfvars file in a folder called production/. Then we could choose in which environment to run Terraform. We'll come back to this when we run it later.

main.tf

Here is where we'll declare resources such as the S3 bucket and the SQS queue to be created on AWS. Let's understand the file in parts.

Provider

In this first part we declare AWS as the provider and set the region using the variable already created, through interpolation (${..}):

```hcl
provider "aws" {
  region = "${var.region}"
}
```

Creating the S3 Bucket

To create a resource via Terraform, we always start with the resource keyword, then the resource name, and finally an identifier:

```hcl
resource "name_resource" "identifier" {}
```

In this snippet we create an S3 bucket called bucket.blog.data; remember that bucket names must be unique. The acl field defines the bucket's access restrictions, in this case private. The tags field is used to provide extra information to the resource; here it will carry the value of the environment variable.

```hcl
resource "aws_s3_bucket" "s3_bucket" {
  bucket = "bucket.blog.data"
  acl    = "private"

  tags = {
    Environment = "${var.environment}"
  }
}
```

Creating the SQS queue

Now we'll create an SQS queue called sqs-posts. Note that resource creation follows the same rules as described earlier.
For this scenario, we set the delay_seconds field, which defines the delay time for a message to be delivered. More details here.

```hcl
resource "aws_sqs_queue" "sqs-blog" {
  name          = "sqs-posts"
  delay_seconds = 90

  tags = {
    Environment = "${var.environment}"
  }
}
```

Running Terraform

Step 1: Initialize Terraform. Open the terminal and, inside the terraform/ directory, run:

```shell
terraform init
```

Console message after running the command.

Step 2: In Terraform you can create workspaces. These are runtime environments that Terraform provides, bringing flexibility when it's necessary to run in more than one environment. Once initialized, a default workspace is created. Run the command below to see which workspace you're on:

```shell
terraform workspace list
```

For this tutorial we will simulate a development environment. Remember the /staging folder we created? Let's start using it as a development environment. To match it, let's create a workspace in Terraform called staging as well. If we had a production environment, a production workspace could also be created.

```shell
terraform workspace new "staging"
```

Done, a new workspace called staging was created!

Step 3: In this step, we list all resources that already exist or that will be created, in this case the latter:

```shell
terraform plan -var-file=staging/vars.tfvars
```

The plan parameter makes it possible to visualize the resources that will be created or updated; it is a good way to understand the behavior before the resources are actually created. The -var-file parameter makes it possible to choose a specific path containing the values of the variables to be used for the execution environment, in this case the staging/vars.tfvars file containing values related to the staging environment. If there were a production workspace, the execution would be the same, just pointing to a different folder.

Console messages after running the plan command.
Looking at the console, note that the resources declared earlier will be created:

- aws_s3_bucket.s3_bucket
- aws_sqs_queue.sqs-blog

Step 4: In this step, we actually create the resources:

```shell
terraform apply -var-file=staging/vars.tfvars
```

Just replace the plan parameter with apply, and a confirmation message will be shown in the console. To confirm the resource creation, just type yes. That's it, the S3 bucket and SQS queue were created! Now you can check them right in the AWS console.

Select workspace

If you need to change workspaces, run the command below, selecting the workspace you want to use:

```shell
terraform workspace select "[workspace]"
```

Destroying resources

This part of the tutorial requires a lot of attention. The next command removes all the resources that were created, without having to remove them one by one, avoiding unexpected surprises with AWS billing:

```shell
terraform destroy -var-file=staging/vars.tfvars
```

Type yes if you want to delete all created resources. I don't recommend using this command in a production environment, but for this tutorial it's useful. Don't forget to destroy the resources, and AWS won't charge you in the future.

Conclusion

Terraform makes it possible to create infrastructure very simply through decoupled code. For this tutorial we used AWS as the provider, but it is possible to use Google Cloud, Azure, and other cloud services.

Books to study and read

If you want to learn more and reach a high level of knowledge, I strongly recommend reading the following books:

Terraform: Up & Running: Writing Infrastructure as Code is a book focused on how to use Terraform and its benefits. The author makes comparisons with several other IaC (Infrastructure as Code) tools such as Ansible and CloudFormation (IaC native to AWS) and, especially, shows how to create and provision different resources for multiple cloud services.
Currently, Terraform is the most used tool in software projects for creating and managing resources in cloud services such as AWS, Azure, Google Cloud, and many others. If you want to be a well-rounded engineer or work in the DevOps area, I strongly recommend learning about the topic.

AWS Cookbook is a practical guide containing 70 recipes about AWS resources and how to solve different challenges. It's a well-written, easy-to-understand book covering key AWS services through practical examples. AWS (Amazon Web Services) is the most widely used cloud service in the world today; if you want to understand more about the subject and be well positioned in the market, I strongly recommend studying it.

Well, that's it, I hope you enjoyed it!
- 5 Basic Apache Spark Commands for Beginners
If you've heard about Apache Spark but have no idea what it is or how it works, you're in the right place. In this post, I'll explain in simple terms what Apache Spark is, show how it can be used, and include practical examples of basic commands to help you start your journey into the world of large-scale data processing.

What is Apache Spark?

Apache Spark is a distributed computing platform designed to process large volumes of data quickly and efficiently. It enables you to split large datasets into smaller parts and process them in parallel across multiple computers (or nodes). This makes Spark a popular choice for tasks such as:

- Large-scale data processing.
- Real-time data analytics.
- Training machine learning models.

Built with a focus on speed and ease of use, Spark supports multiple programming languages, including Python, Java, Scala, and R.

Why is Spark so popular?

- Speed: Spark is much faster than solutions like Hadoop MapReduce because it uses in-memory processing.
- Flexibility: it supports various tools like Spark SQL, MLlib (machine learning), GraphX (graph analysis), and Structured Streaming (real-time processing).
- Scalability: it can handle small local datasets or massive volumes in clusters with thousands of nodes.

Getting Started with Apache Spark

Before running commands in Spark, you need to understand the concept of RDDs (Resilient Distributed Datasets), which are collections of data distributed across the nodes of a cluster. Additionally, Spark works with DataFrames and Datasets, which are more modern and optimized data structures.

How to Install Spark

Apache Spark can run locally on your computer or on cloud clusters. For a quick setup, you can use PySpark, Spark's Python interface:

```shell
pip install pyspark
```

Basic Commands in Apache Spark

Here are some practical examples to get started:
Creating a SparkSession

Before anything else, you need to start a Spark session:

from pyspark.sql import SparkSession

spark = SparkSession.builder \
    .appName("SparkExample") \
    .getOrCreate()

2. Reading a File

Let's load a CSV file into a DataFrame:

df = spark.read.csv("data.csv", header=True, inferSchema=True)
df.show()

3. Selecting and Filtering Data

You can select specific columns or apply filters:

df.select("name", "age").show()
df.filter(df["age"] > 30).show()

4. Transforming Data

Use functions like groupBy and agg to transform data:

df.groupBy("city").count().show()

5. Saving Results

Results can be saved to a file:

df.write.csv("result.csv", header=True)

Conclusion

Apache Spark is a powerful tool that makes large-scale data processing accessible, fast, and efficient. Whether you're starting in data or looking to learn more about distributed computing, Spark is an excellent place to begin. Are you ready to dive deeper into the world of Apache Spark? Check out more posts about Apache Spark by accessing the links below:

How to read CSV file with Apache Spark
- Data Mesh: Does It Still Make Sense to Adopt?
Introduction As companies grow, the volumes of data that need to be processed, stored, and analyzed increase exponentially. Traditional data architectures, centralized in a single repository or team, have started to show inefficiencies. Centralized models, such as the well-known Data Warehouses and Data Lakes, often encounter bottlenecks, limited scalability, and difficulties in meeting the growing demand for data across multiple business areas. In this context, Data Mesh emerges as an innovative approach, proposing the decentralization of data operations and governance, distributing responsibility to domains oriented around data products. Each domain, or business area, becomes responsible for creating, maintaining, and using its own data as a complete product, meeting both quality and consumption requirements. With Data Mesh, companies can more efficiently handle data growth, allowing different functional areas to take ownership of the data they generate and consume. Decentralized management offers scalability, autonomy, and faster delivery of valuable insights, addressing many challenges found in traditional centralized architectures. This approach is rapidly gaining relevance in the field of Big Data, especially in organizations that need to adapt to a fast-evolving data ecosystem. Data Mesh is not just a new architecture but also a cultural shift in how data is managed and valued within companies. But What Is Data Mesh, After All? Data Mesh is a modern approach to data architecture that seeks to solve the challenges of centralized architectures by proposing a decentralization of both data processing and governance. The central idea of Data Mesh is to treat data as a product, where each domain within the organization is responsible for managing and delivering its own data autonomously, similar to how they manage other products or services. 
This concept was developed to address the issues that arise in centralized architectures as data volume, complexity, and diversity grow. Instead of relying on a central data team to manage and process all information, Data Mesh distributes responsibility to cross-functional teams. This means that each team, or domain, becomes the "owner" of their data, ensuring it is reliable, accessible, and of high quality. Data Mesh is supported by several essential pillars that shape its unique approach. First, it decentralizes data management by delegating responsibility to the domains within an organization. Each domain is responsible for its own data, allowing business teams to independently manage the data they produce and use. Additionally, one of the key concepts of Data Mesh is treating data as a product. This means that data is no longer seen merely as a byproduct of business processes but rather as valuable assets, with teams responsible for ensuring that it is reliable, accessible, and useful to consumers. For this to work, a robust architecture is essential, providing teams with the necessary tools to efficiently manage, access, and share data autonomously, without depending on a centralized team. This infrastructure supports the creation and maintenance of data pipelines and the monitoring of data quality. Finally, federated governance ensures that, despite decentralization, there are rules and standards that all teams follow, ensuring compliance and data interoperability across different domains. The Lack of Autonomy in Accessing Data One of the biggest challenges faced by business areas in many organizations is their dependence on centralized data teams to obtain the information needed for strategic decisions. Teams in marketing, sales, operations, and other departments constantly need data to guide campaigns, improve processes, and optimize operations. However, access to this data is often restricted to a central data or IT team, leading to various bottlenecks. 
This lack of autonomy directly impacts the agility of business areas. Each new data request must be formally submitted to the data team, which is already overwhelmed with other demands. The result? Long waiting times for analyses, reports, and insights that should be generated quickly. Often, decisions must be made based on outdated or incomplete data, harming the company's competitiveness and ability to adapt to new opportunities. Another critical issue is the lack of visibility. Business areas often struggle to track what is available in the data catalog, where to find relevant data, and even understand the quality of that information. The alignment between business requirements and data delivery becomes strained, creating a gap between what the business needs and what the data team can provide. Additionally, centralizing data in an exclusive team hinders the development of tailored solutions for different areas. Each business team has specific needs regarding the data it consumes, and the centralized model generally offers a generic approach that doesn't always meet those needs. This can lead to frustration and the perception that data is not useful or actionable in each area's specific context. These factors highlight the need for a paradigm shift in how companies manage and access data. Data Mesh proposes a solution to this lack of autonomy by decentralizing data management responsibility and empowering business areas, allowing them to own the data they produce and consume. However, this shift comes with cultural and organizational challenges that must be overcome to ensure the success of this new approach. Cultural Changes Are Necessary Adopting Data Mesh is not just about changing the data architecture; it requires a profound cultural transformation within organizations. One of the biggest shifts is decentralizing responsibility for data. 
In a traditional model, a central IT or data team is typically the sole entity responsible for managing, processing, and providing access to data. With Data Mesh, this responsibility shifts to the business areas themselves, who become the owners of the data they produce and consume. This cultural change can be challenging, as business teams are often not used to directly handling data governance and processing. They will need to adapt to new tools and technologies, and more importantly, to a new mindset where the use and quality of data become a priority in their daily activities. This shift requires training and the development of new skills, such as understanding data modeling and best governance practices. Another critical cultural aspect is the collaboration between business and technology teams. In the Data Mesh model, IT is no longer the single point of contact for all data-related needs. Business areas gain autonomy, but this doesn't mean that IT and data engineers become less important. On the contrary, collaboration between both sides becomes even more essential. IT must provide the tools and infrastructure for domains to operate independently, while business areas must ensure that their data meets the quality and governance standards set by the organization. This new division of responsibilities can lead to internal resistance, especially in companies accustomed to a hierarchical and centralized structure. Data teams might feel like they are losing control over governance, while business areas may feel overwhelmed by their new responsibilities. Overcoming this resistance requires strong leadership, committed to aligning the entire organization around a common goal: using data as a strategic and distributed asset. Moreover, the success of Data Mesh depends on the adoption of a culture of shared responsibility. Each domain needs to see data as a product that must be managed with the same care and attention as any other product offered to the market. 
This requires a clear commitment to data quality, accessibility, and usability, which can be a significant leap for areas that previously did not focus on these aspects. Not Only Cultural Changes Drive Data Mesh: What Are the Common Tools in This Ecosystem? Implementing a Data Mesh requires a robust set of tools and technologies that support data decentralization while maintaining governance, quality, and efficiency in data processing and consumption. The tools used in the Data Mesh ecosystem vary, but they generally fall into three main categories: data storage and processing platforms, orchestration and automation tools, and data governance and quality tools. Data Storage and Processing Platforms One of the foundations of Data Mesh is ensuring that each domain has control over the data it produces, which requires flexible and scalable platforms for storage and processing. Some of the most common technologies include: AWS S3 and Azure Data Lake: These storage platforms provide a flexible infrastructure for both raw and processed data, allowing domains to maintain their own data with individualized access control. They are key in giving domains autonomy over data management while offering scalable storage for vast amounts of information. Apache Kafka: Often used to manage data flow between domains, Kafka enables real-time data streaming, which is crucial for companies that need to handle large volumes of information continuously and in a decentralized manner. It facilitates the transfer of data across domains with minimal latency. Spark and Databricks: These powerful tools are used for processing large volumes of data and help scale distributed pipelines. Spark, particularly when paired with Databricks, allows domains to efficiently manage their data workflows, ensuring autonomy and high performance across different parts of the organization. 
Kubernetes: As a container orchestration platform, Kubernetes enables the creation of isolated execution environments where different domains can run their own data pipelines independently. It ensures that each domain has the infrastructure needed to manage its data operations without interfering with others, maintaining both autonomy and operational efficiency. Orchestration and Automation Tools For domains to manage their own data without relying on a centralized team, it is essential to have orchestration tools that automate ETL (Extract, Transform, Load) processes, data monitoring, and updates. Some of the most common tools include: Apache Airflow: An open-source tool that simplifies the automation of data pipelines, task scheduling, and workflow monitoring. It helps domains maintain their data ingestion and transformation processes without the need for continuous manual intervention. dbt (Data Build Tool): Focused on data transformation, dbt allows data analysts to perform transformations directly within the data warehouse, making it easier to implement changes to data models for each domain with greater autonomy. Prefect: Another orchestration tool, similar to Airflow, but with a focus on simplicity and flexibility in managing workflows. Prefect facilitates the implementation and maintenance of data pipelines, giving domains more control over their data processes. Data Governance and Quality Tools Decentralization brings with it a major challenge: maintaining governance and ensuring data quality across all domains. Some tools are designed to efficiently handle these challenges: Great Expectations: One of the leading data validation tools, enabling domains to implement and monitor data quality directly within ETL pipelines. This ensures that the data delivered meets expected standards, regardless of the domain. Monte Carlo: A data monitoring platform that automatically alerts users to quality issues and anomalies. 
It helps maintain data reliability even in a distributed environment, ensuring that potential problems are identified and resolved quickly. Collibra: Used to maintain a data catalog and implement centralized governance, even in a decentralized architecture. It helps ensure that all areas follow common governance standards, maintaining data interoperability and compliance across domains. Consumption or Self-Service Infrastructure One of the keys to the success of Data Mesh is providing business teams with a self-service infrastructure, allowing them to create, manage, and consume their own data. This involves everything from building data pipelines to using dashboards for data analysis: Tableau and Power BI: These are commonly used as data visualization and exploration tools, enabling end users to quickly and efficiently access and interpret data. Both platforms offer intuitive interfaces that allow non-technical users to create reports and dashboards, helping them derive insights and make data-driven decisions without needing extensive technical expertise. Jupyter Notebooks: Frequently used by data science teams for experimentation and analysis, Jupyter Notebooks enable domains to independently analyze data without needing intervention from central teams. This tool allows for interactive data exploration, combining code, visualizations, and narrative explanations in a single environment, making it a powerful resource for data-driven insights and experimentation. What Are the Risks of Adopting Data Mesh? Although Data Mesh brings numerous advantages, such as scalability, agility, and decentralization, its adoption also presents considerable challenges, ranging from deep cultural shifts to financial risks. These disadvantages can compromise the successful implementation of the model and, if not addressed properly, can lead to inefficiencies or even project failures. 
Let's explore these disadvantages in more detail: Cultural and Organizational Complexity The transition to a Data Mesh model requires a significant cultural shift in how data is managed and perceived within the company. This can be an obstacle, especially in organizations with a long-standing tradition of centralized data management. Mindset Shift: Traditionally, many companies view data as the sole responsibility of IT or a central data team. In Data Mesh, this responsibility is distributed, and business areas need to adopt a "data as a product" mentality. This shift requires domains to commit to treating their data with the same rigor as any other product they deliver. However, this transition may face resistance, especially from teams that lack technical experience in data governance and management. Training and Development: A clear disadvantage lies in the effort required to train business teams to manage and process their own data. This can include everything from using data tools to understanding best practices in governance. Companies need to invest in continuous training to ensure that teams are prepared for their new responsibilities, which can be costly and time-consuming. Internal Resistance: Implementing Data Mesh means altering the dynamics of power and responsibility within the organization. Centralized data teams may resist decentralization, fearing a loss of control over data governance. At the same time, business teams may feel overwhelmed by new responsibilities that were not previously part of their duties. Managing this resistance requires strong and well-aligned leadership to ensure a smooth transition and to address concerns from both sides effectively. Data Fragmentation and Governance One of the major concerns when adopting a decentralized architecture is the risk of data fragmentation. 
Without effective and federated governance, different domains may adopt divergent data standards and formats, which can lead to data silos, duplication of information, and integration challenges. Ensuring consistent governance across domains is essential to avoid these issues, as it maintains data interoperability and ensures that data remains accessible and usable across the organization. Data Inconsistency: Without clear governance, decentralization can lead to inconsistencies in data across domains. Each business area may have its own definitions and practices for collecting and processing data, creating an environment where it becomes difficult to consolidate or compare information from different parts of the company. This lack of uniformity can undermine decision-making and hinder the ability to generate comprehensive insights. Challenges in Federated Governance: Implementing efficient federated governance is one of the biggest challenges of Data Mesh. This requires the creation of data policies and standards that are followed by all domains, ensuring interoperability and quality. However, ensuring that all domains adhere to these rules, especially in large organizations, can be difficult. If governance becomes too relaxed or fragmented, the benefits of Data Mesh can be compromised, leading to inefficiencies and data management issues across the organization. High Financial Costs Implementing Data Mesh can also involve significant financial costs , both in the short and long term. This is mainly due to the need for investments in new technologies, training, and processes. Organizations must allocate resources for the acquisition and integration of tools that support decentralization, as well as for continuous training to prepare teams for their new responsibilities. Additionally, maintaining a decentralized system may require ongoing investments in infrastructure and governance to ensure smooth operations and data quality across domains. 
Infrastructure Investment: To ensure that each domain has the capacity to manage its own data, companies need to invest in a robust self-service infrastructure, which may include storage, processing, and data orchestration platforms. The initial cost of building this infrastructure can be high, especially if the company is currently operating under a centralized model that requires restructuring. These investments are necessary to enable domains to function independently, but they can represent a significant financial outlay in terms of both technology and implementation. Ongoing Maintenance: In addition to the initial implementation cost, maintaining a decentralized model can be more expensive than a centralized system. Each domain requires dedicated resources to manage and ensure the quality of its data, which can increase operational costs. Furthermore, tools and services to ensure federated governance and interoperability between domains require continuous updates and monitoring. These ongoing efforts add to the complexity and expense of keeping the system functioning smoothly over time. Risk of Financial Inefficiency: If the implementation of Data Mesh is poorly executed, the company may end up spending more than initially planned without reaping the expected benefits. For example, a lack of governance can lead to data duplication and redundant efforts across domains, resulting in a waste of financial and human resources. Inefficiencies like these can offset the potential advantages of Data Mesh, making it crucial to ensure proper planning, governance, and execution from the outset. Difficulty in Integration and Alignment Finally, data decentralization can lead to integration challenges between domains, especially if there is no clear alignment between business areas and the data standards established by the organization. 
Without consistent communication and adherence to common protocols, domains may develop disparate systems and data formats, making it harder to integrate and share data across the organization. This misalignment can hinder collaboration, slow down data-driven decision-making, and reduce the overall efficiency of the Data Mesh approach. Coordination Between Domains: With Data Mesh, each domain operates autonomously, which can create coordination challenges between teams. The lack of clear and frequent communication can result in inconsistent or incompatible data, making it difficult to perform integrated analyses across different areas of the company. Ensuring that domains collaborate effectively and align on data standards and governance practices is essential to avoid fragmentation and maintain the overall integrity of the organization's data ecosystem. Quality Standards: Maintaining a uniform quality standard across domains can be a challenge. Each business area may have a different perspective on what constitutes quality data, and without clear governance, this can result in fragmented or unreliable data. Inconsistent quality standards between domains can undermine the overall trustworthiness and usability of the data, making it difficult to rely on for decision-making or cross-domain analysis. Advantages and Disadvantages: What Are the Benefits for Companies That Have Adopted Data Mesh Compared to Those That Haven’t? When comparing a company that has adopted Data Mesh with one that still follows the traditional centralized model, several significant differences emerge, both in terms of advantages and disadvantages. This comparison helps us understand where Data Mesh may be more appropriate, as well as the challenges it can present compared to the conventional model. Speed and Agility in Delivering Insights Company with Data Mesh: By adopting Data Mesh, business areas gain autonomy to manage and access their own data. 
This means that instead of relying on a central data team, each domain can build and adjust its data pipelines according to its specific needs. This often leads to a significant reduction in the time required to obtain actionable insights, as business areas avoid the bottlenecks commonly found in a centralized approach. Company without Data Mesh: In the centralized approach, all data requests must go through a central team, which is often overwhelmed with multiple requests. This results in long wait times for reports, analyses, and insights. Additionally, the backlog of data requests can pile up, delaying critical business decision-making. Advantage of Data Mesh: Decentralization speeds up access to insights, making the company more agile and better equipped to respond quickly to market changes. Data Quality and Consistency Company with Data Mesh: In the Data Mesh model, each domain is responsible for the quality of the data it generates. While this can mean that the data is more contextualized to the domain’s needs, there is a risk of inconsistencies if federated governance is not well implemented. Each domain may adopt slightly different standards, leading to issues with data interoperability and comparability across domains. Company without Data Mesh: In a centralized model, data governance is more rigid and controlled, ensuring greater consistency across the organization. However, this also creates a bottleneck when it comes to implementing new standards or adapting data for the specific needs of different business areas. Disadvantage of Data Mesh: Decentralization can lead to data inconsistencies, especially if there is not strong enough governance to standardize practices across domains. Scalability Company with Data Mesh: Data Mesh is designed to scale efficiently in large organizations. As the company grows and new domains emerge, these domains can quickly establish their own data pipelines without overloading a central team. 
This allows the organization to expand without creating a bottleneck in data operations. Company without Data Mesh: In a centralized model, scalability is a major challenge. As the company grows and more areas need access to data, the centralized team becomes a bottleneck. Expanding central infrastructure can also be costly and complex, making it difficult for the company to adapt to new data volumes and types. Advantage of Data Mesh: More natural and efficient scalability, as business areas can manage their own data without relying on an overburdened central team. Operational Costs Company with Data Mesh: While Data Mesh offers greater autonomy and scalability, the operational costs can be higher initially. Implementing self-service infrastructure, decentralized governance, and training business teams to manage data can be expensive. Additionally, there are ongoing costs for maintaining quality standards and governance across domains. Company without Data Mesh: A centralized model may be cheaper in terms of maintenance and governance, as the central data team has full control over the system. However, hidden costs may arise in the form of inefficiencies and missed opportunities due to slow data delivery. Disadvantage of Data Mesh: Higher initial costs and ongoing operational expenses related to governance and maintaining decentralized infrastructure. Innovation and Experimentation Company with Data Mesh: With each domain autonomous in managing its data, there is greater flexibility to experiment with new methods of data collection and processing. Teams can adjust their approaches to meet their specific needs without waiting for approval or availability from a central IT team. This encourages a culture of innovation, where different areas can quickly test hypotheses and adapt to changes. 
Company without Data Mesh: In the centralized model, any experimentation or innovation with data must go through the bureaucratic process of prioritization and execution by the central team. This can delay innovation and limit the business areas' flexibility to adapt their practices quickly. Advantage of Data Mesh: Greater flexibility and innovation potential in business areas, allowing them to freely experiment with their own data. Governance and Compliance Company with Data Mesh: Maintaining governance and compliance in a decentralized architecture can be challenging. Without well-implemented federated governance, there is a risk that different domains may adopt divergent practices, which can compromise data quality and even put the company at risk of violating data protection regulations, such as GDPR or LGPD. Company without Data Mesh: In the centralized model, governance is much more controlled, and compliance with regulatory standards is managed by a single data team, reducing the risk of violations and inconsistencies. However, this can lead to a more rigid and slower approach to adapting to new regulatory requirements. Disadvantage of Data Mesh: Decentralized governance can increase the risk of regulatory non-compliance and data inconsistency. Is Data Mesh a Silver Bullet? The concept and its ideas can serve as a silver bullet for many of the challenges a centralized architecture faces when trying to keep up with the rapid growth of a company and the need for business areas to extract insights quickly. While Data Mesh is a powerful approach to solving scalability and autonomy challenges in data, it is not a universal solution. It offers significant advantages, such as decentralization and greater agility, but it also brings complex challenges, like the need for effective federated governance and high implementation costs. 
The primary limitation of Data Mesh is that it requires a deep cultural shift, where business areas become responsible for the quality and governance of their data. Companies that are not ready for this transformation may face data fragmentation and a lack of standardization. Moreover, Data Mesh is not suitable for all organizations. Smaller companies or those with lower data maturity may find Data Mesh overly complex and expensive, opting for simpler solutions like Data Lakes or Data Warehouses. Therefore, Data Mesh is not a silver bullet. It solves many data-related problems but is not a magical solution for all companies and situations. Its success depends on the organization's maturity and readiness to adopt a decentralized and adaptive architecture. Hope you enjoyed this post, share it, and see you next time!
- Don't Let Your Dashboards Break: Understanding DistKey and SortKey in Practice
First, What Is AWS Redshift? Redshift is a highly scalable cloud-based data warehouse service offered by AWS. It allows companies to quickly analyze large volumes of data using standard SQL and BI tools. Redshift's architecture is optimized for large-scale data analysis, leveraging parallelization and columnar storage for high performance. I recommend reading my post where I dive deeper into Redshift’s architecture and its components, available at Understanding AWS Redshift and Its Components. Why Use DistKey and SortKey? Understanding DistKey and SortKey in practice can provide several benefits, the most important being improved query performance. DistKey optimizes joins and aggregations by efficiently distributing data across nodes, while SortKey speeds up queries that filter and sort data, allowing Redshift to read only the necessary data blocks. Both help to make queries faster and improve resource efficiency. DistKey and How It Works DistKey (or Distribution Key) is the strategy for distributing data across the nodes of a Redshift cluster. When you define a column as a DistKey, the records sharing the same value in that column are stored on the same node, which can reduce the amount of data movement between nodes during queries. One of the main advantages is Reducing Data Movement Between Nodes, increasing query performance and improving the utilization of Redshift’s distributed processing capabilities. Pay Attention to Cardinality Choosing a column with low cardinality (few distinct values) as a DistKey can result in uneven data distribution, creating "hot nodes" (nodes overloaded with data) and degrading performance. What is Cardinality? Cardinality refers to the number of distinct values in a column. A column with high cardinality has many distinct values, making it a good candidate for a DistKey in Amazon Redshift. High cardinality tends to distribute data more evenly across nodes, avoiding overloaded nodes and ensuring balanced query performance. 
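To make the cardinality idea concrete, here is a minimal Python sketch of a rule of thumb for screening DistKey candidates. The counts would come from running something like SELECT COUNT(DISTINCT client_id), COUNT(*) FROM sales; in Redshift; the function names and the 1% threshold are illustrative assumptions, not anything Redshift itself provides.

```python
def cardinality_ratio(distinct_count: int, total_rows: int) -> float:
    """Fraction of rows carrying a distinct value (1.0 = fully unique column)."""
    if total_rows == 0:
        return 0.0
    return distinct_count / total_rows

def looks_like_good_distkey(distinct_count: int, total_rows: int,
                            min_ratio: float = 0.01) -> bool:
    # Illustrative rule of thumb (tune for your workload): very
    # low-cardinality columns, like a boolean flag, make poor DistKeys
    # because rows pile up on a few slices ("hot nodes").
    return cardinality_ratio(distinct_count, total_rows) >= min_ratio

# client_id: ~2M distinct values over 100M rows -> plausible candidate
print(looks_like_good_distkey(2_000_000, 100_000_000))  # True
# is_active: 2 distinct values over 100M rows -> poor candidate
print(looks_like_good_distkey(2, 100_000_000))          # False
```

A check like this is only a first filter; the final choice should also consider which columns your joins actually use, as discussed below.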
Although the idea behind DistKey is to distribute distinct values evenly across nodes, keep in mind that if data moves frequently between nodes, it will reduce the performance of complex queries. Therefore, it’s important to carefully choose the right column to define as a DistKey. Benefits of Using DistKey To make it clearer, here are some benefits of choosing the right DistKey strategy: Reduced Data Movement Between Nodes: When data sharing the same DistKey is stored on the same node, join and aggregation operations using that key can be performed locally on a single node. This significantly reduces the need to move data between nodes, which is one of the main factors affecting query performance in distributed systems. Better Performance in Joins and Filtered Queries: If queries frequently perform joins between tables sharing the same DistKey, keeping the data on the same node can drastically improve performance. Query response times are faster because operations don’t require data redistribution between nodes. Suppose you have two large tables in your Redshift cluster: Table A (transactions): Contains billions of customer transaction records. Table B (customers): Stores customer information. Both tables have the column client_id. If you frequently run queries joining these two tables to get transaction details by customer, defining client_id as the DistKey on both tables ensures that records for the same customer are stored on the same node.

SELECT A.transaction_id, A.amount, B.customer_name
FROM transactions A
JOIN customers B ON A.client_id = B.client_id
WHERE B.state = 'CA';

By keeping client_id on the same node, joins can be performed locally without needing to redistribute data across different nodes in the cluster. This dramatically reduces query response times. Without a DistKey, Redshift would need to redistribute data from both tables across nodes to execute the join, increasing the query’s execution time. 
With client_id as the DistKey, data is already located on the same node, allowing for much faster execution.

Storage and Processing Efficiency: Local execution of operations on a single node, without the need for redistribution, leads to more efficient use of CPU and memory resources. This can result in better overall cluster utilization, lower costs, and higher throughput for queries.

Disadvantages of Using DistKey

Data Skew (Imbalanced Data Distribution): One of the biggest disadvantages is the risk of creating data imbalance across nodes, known as data skew. If the column chosen as the DistKey has low cardinality or if values are not evenly distributed, some nodes may end up storing much more data than others. This can result in overloaded nodes, degrading overall performance.

Reduced Flexibility for Ad Hoc Queries: When a DistKey is defined, it optimizes specifically for queries that use that key. However, if ad hoc queries or analytical needs change, the DistKey may no longer be suitable. Changing the DistKey requires redesigning the table and possibly redistributing the data, which can be time-consuming and disruptive.

Poor Performance in Non-Optimized Queries: If queries that don't effectively use the DistKey are executed, performance can suffer. This is particularly relevant in scenarios where queries vary widely or don't follow predictable patterns. While the lack of data movement between nodes is beneficial for some queries, it may also limit performance for others that require access to data distributed across all nodes.

How to Create a DistKey in Practice

After selecting the best strategy based on the discussion above, creating a DistKey is straightforward. Simply add the DISTKEY keyword when creating the table.

CREATE TABLE sales (
    sale_id INT,
    client_id INT DISTKEY,
    sale_date DATE,
    amount DECIMAL(10, 2)
);

In the example above, the column client_id has been defined as the DistKey, optimizing queries that retrieve sales data by customer.
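After creating the table, two quick diagnostics can confirm the DistKey is behaving as intended. The first inspects the query plan: in Redshift's EXPLAIN output, a join labeled DS_DIST_NONE runs locally on each node, while labels such as DS_BCAST_INNER or DS_DIST_BOTH indicate data movement between nodes. The second uses the SVV_TABLE_INFO system view, whose skew_rows column reports the ratio between the fullest and emptiest slice, to detect the data skew discussed above. Table names follow the earlier examples:

```sql
-- 1. Verify the join strategy: DS_DIST_NONE in the plan output means the
--    join executes locally on each node, without redistribution.
EXPLAIN
SELECT s.sale_id, s.amount, c.customer_name
FROM sales s
JOIN customers c ON s.client_id = c.client_id;

-- 2. Detect skew introduced by the DistKey: skew_rows is the ratio
--    between the slice with the most rows and the slice with the fewest;
--    values far above 1 suggest an uneven distribution.
SELECT "table", diststyle, skew_rows
FROM svv_table_info
ORDER BY skew_rows DESC;
```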
SortKey and How It Works

SortKey is the key used to determine the physical order in which data is stored in Redshift tables. Sorting data can significantly speed up queries that use filters based on the columns defined as SortKey.

Benefits of SortKey

Query Performance with Filters and Groupings: One of the main advantages of using SortKey is improved performance for queries applying filters (WHERE), orderings (ORDER BY), or groupings (GROUP BY) on the columns defined as SortKey. Since data is physically stored on disk in the order specified by the SortKey, Redshift can read only the necessary data blocks, instead of scanning the entire table.

Reduced I/O and Increased Efficiency: With data ordered by SortKey, Redshift minimizes I/O by accessing only the relevant data blocks for a query. This is especially useful for large tables, where reading all rows would be resource-intensive. Reduced I/O results in faster query response times.

Easier Management of Temporal Data: SortKeys are particularly useful for date or time columns. When you use a date column as a SortKey, queries filtering by time ranges (e.g., "last 30 days" or "this year") can be executed much faster. This approach is common in scenarios where data is queried based on dates, such as transaction logs or event records.

Support for the VACUUM Command: The VACUUM command is used to reorganize data in Redshift, removing free space and applying the order defined by the SortKey. Tables with a well-defined SortKey benefit the most from this process, as VACUUM can efficiently reorganize the data, resulting in a more compact table and even faster queries.

Disadvantages of Using SortKey

Incorrect Choice of SortKey Column: If an inappropriate column is chosen as the SortKey, there may be no significant improvement in query performance, or worse, performance may actually degrade.
For example, if the selected column is not frequently used in filters or sorting, the advantage of accessing data blocks efficiently is lost, meaning Redshift will scan more blocks, resulting in higher query latency. An example would be defining a status column (with few distinct values) as the SortKey in a table where queries typically filter by transaction_date. This would result in little to no improvement in execution time.

Table Size and Reorganization

In very large tables, reorganizing data to maintain SortKey efficiency can be slow and resource-intensive. This can impact system availability and overall performance. For example, when a table with billions of records needs to be reorganized due to inserts or updates that disrupt the SortKey order, the VACUUM operation can take hours or even days, depending on the table size and cluster workload.

Difficulty in Changing the SortKey

Changing the SortKey of an existing table can be complex and time-consuming, especially for large tables. This involves creating a new table, copying the data to the new table with the new SortKey, and then dropping the old table. In other words, if you realize that the originally chosen SortKey is no longer optimizing queries as expected, changing the SortKey may require a complete data migration, which can be highly disruptive.

How to Create a SortKey in Practice

Here, sale_date was defined as the SortKey, ideal for queries that filter records based on specific dates or date ranges.

CREATE TABLE sales (
    sale_id INT,
    client_id INT,
    sale_date DATE SORTKEY,
    amount DECIMAL(10, 2)
);

Conclusion

SortKey is highly effective for speeding up queries that filter, sort, or group data. By physically ordering the data on disk, SortKeys allow Redshift to read only the relevant data blocks, resulting in faster query response times and lower resource usage. However, choosing the wrong SortKey or failing to manage data reorganization can lead to degraded performance and increased complexity.
On the other hand, DistKey is crucial for optimizing joins and aggregations across large tables. By efficiently distributing data across cluster nodes, a well-chosen DistKey can minimize data movement between nodes, significantly improving query performance. The choice of DistKey should be based on column cardinality and query patterns to avoid issues like data imbalance or "hot nodes." Both SortKey and DistKey require careful analysis and planning. Using them improperly can result in little or no performance improvement—or even worsen performance. Changing SortKeys or DistKeys can also be complex and disruptive in large tables. Therefore, the key to effectively using SortKey and DistKey in Redshift is a clear understanding of data access patterns and performance needs. With proper planning and monitoring, these tools can transform the way you manage and query data in Redshift, ensuring your dashboards and reports remain fast and efficient as data volumes grow. I hope you enjoyed this overview of Redshift’s powerful features. All points raised here are based on my team's experience in helping various areas within the organization leverage data for value delivery. I aimed to explain the importance of thinking through strategies for DistKey and SortKey in a simple and clear manner, with real-world examples to enhance understanding. Until next time!
- Understanding AWS Redshift and its components
Introduction In today's data-driven world, the ability to quickly and efficiently analyze massive datasets is more critical than ever. Enter AWS Redshift, Amazon Web Services' answer to the growing need for comprehensive data warehousing solutions. But what is AWS Redshift, and why is it becoming a staple in the arsenal of data analysts and businesses alike? At its most basic, AWS Redshift is a cloud-based service that allows users to store, query, and analyze large volumes of data. It's designed to handle petabytes of data across a cluster of servers, providing the horsepower needed for complex analytics without the need for infrastructure management typically associated with such tasks. For those who are new to the concept, you might wonder how it differs from traditional databases. Unlike conventional databases that are optimized for transaction processing, AWS Redshift is built specifically for high-speed analysis and reporting of large datasets. This focus on analytics allows Redshift to deliver insights from data at speeds much faster than traditional database systems. One of the key benefits of AWS Redshift is its scalability. You can start with just a few hundred gigabytes of data and scale up to a petabyte or more, paying only for the storage and computing power you use. This makes Redshift a cost-effective solution for companies of all sizes, from startups to global enterprises. Furthermore, AWS Redshift integrates seamlessly with other AWS services, such as S3 for data storage, Data Pipeline for data movement, and QuickSight for visualization, creating a robust ecosystem for data warehousing and analytics. This integration simplifies the process of setting up and managing your data workflows, allowing you to focus more on deriving insights and less on the underlying infrastructure. In essence, AWS Redshift democratizes data warehousing, making it accessible not just to large corporations with deep pockets but to anyone with data to analyze. 
Whether you're a seasoned data scientist or a business analyst looking to harness the power of your data, AWS Redshift offers a powerful, scalable, and cost-effective platform to bring your data to life. Understanding AWS Redshift and its components can help you make informed decisions if you are considering this powerful tool. In the next sections, we will dive into Redshift and its components.

Is AWS Redshift a Database?

While AWS Redshift shares some characteristics with traditional databases, it's more accurately described as a data warehousing service. This distinction is crucial for understanding its primary function and capabilities. Traditional databases are designed primarily for online transaction processing (OLTP), focusing on efficiently handling a large number of short, atomic transactions. These databases excel in operations such as insert, update, delete, and query by a single row, making them ideal for applications that require real-time access to data, like e-commerce websites or banking systems. On the other hand, AWS Redshift is optimized for online analytical processing (OLAP). It's engineered to perform complex queries across large datasets, making it suitable for business intelligence, data analysis, and reporting tasks. Redshift achieves high query performance on large datasets by using columnar storage, data compression, and parallel query execution, among other techniques.

So, is AWS Redshift a database? Not in the traditional sense of managing day-to-day transactions. Instead, it's a specialized data warehousing service designed to aggregate, store, and analyze vast amounts of data from multiple sources. Its strength lies in enabling users to gain insights and make informed decisions based on historical data analysis rather than handling real-time transaction processing.
In summary, while Redshift has database-like functionalities, especially in data storage and query execution, its role as a data warehousing service sets it apart from conventional database systems. It's this distinction that empowers businesses to harness the full potential of their data for analytics and decision-making processes.

Advantages of AWS Redshift

Performance Efficiency: AWS Redshift utilizes columnar storage and data compression techniques, which significantly improve query performance by reducing the amount of I/O needed for data retrieval. This makes it exceptionally efficient for data warehousing operations.

Scalability: Redshift allows you to scale your data warehouse up or down quickly to meet your computing and storage needs without downtime, ensuring that your data analysis does not get interrupted as your data volume grows.

Cost-Effectiveness: With its pay-as-you-go pricing model, AWS Redshift provides a cost-effective solution for data warehousing. You only pay for the resources you use, which helps in managing costs more effectively compared to traditional data warehousing solutions.

Easy to Set Up and Manage: AWS provides a straightforward setup process for Redshift, including provisioning resources and configuring your data warehouse without the need for extensive database administration expertise.

Security: Redshift offers robust security features, including encryption of data in transit and at rest, network isolation using Amazon VPC, and granular permissions with AWS Identity and Access Management (IAM).

Integration with AWS Ecosystem: Redshift seamlessly integrates with other AWS services, such as S3, Glue and QuickSight, enabling a comprehensive cloud solution for data processing, storage, and analysis.

Massive Parallel Processing (MPP): Redshift's architecture is designed to distribute and parallelize queries across all nodes in a cluster, allowing for rapid execution of complex data analyses over large datasets.
High Availability: AWS Redshift is designed for high availability and fault tolerance, with data replication across different nodes and automatic replacement of failed nodes, ensuring that your data warehouse remains operational.

Disadvantages of AWS Redshift

Complexity in Management: Despite AWS's efforts to simplify, managing a Redshift cluster can still be complex, especially when it comes to fine-tuning performance and managing resources efficiently.

Cost at Scale: While Redshift is cost-effective for many scenarios, costs can escalate quickly with increased data volume and query complexity, especially if not optimized properly.

Learning Curve: New users may find there's a significant learning curve to effectively utilize Redshift, especially those unfamiliar with data warehousing principles and SQL.

Limited Concurrency: In some cases, Redshift can struggle with high concurrency scenarios where many queries are executed simultaneously, impacting performance.

Maintenance Overhead: Regular maintenance tasks, such as vacuuming to reclaim space and running ANALYZE to update statistics, are necessary for optimal performance but can be cumbersome to manage.

Data Load Performance: Loading large volumes of data into Redshift can be time-consuming, especially without careful management of load operations and optimizations.

Cold Start Time: Starting up a new Redshift cluster or resizing an existing one can take significant time, leading to delays in data processing and analysis.

AWS Redshift Architecture and Its Components

The architecture of AWS Redshift is a marvel of modern engineering, designed to deliver high performance and reliability. We'll explore its core components and how they interact to process and store data efficiently. The architecture diagram in the original post traces this flow, from the moment a client interacts with the cluster to how data is processed by each component.
Below, we describe each component and its importance to the functioning of Redshift:

Leader Node

Function: The leader node is responsible for coordinating query execution. It parses and develops execution plans for SQL queries, distributing the workload among the compute nodes.
Communication: It also aggregates the results returned by the compute nodes and finalizes the query results to be returned to the client.

Compute Nodes

Function: These nodes are where the actual data storage and query execution take place. Each compute node contains one or more slices, which are partitions of the total dataset.
Storage: Compute nodes store data in columnar format, which is optimal for analytical queries as it allows for efficient compression and fast data retrieval.
Processing: They perform the operations instructed by the leader node, such as filtering, aggregating, and joining data.

Node Slices

Function: Slices are subdivisions of a compute node's memory and disk space, allowing the node's resources to be used more efficiently.
Parallel Processing: Each slice processes its portion of the workload in parallel, which significantly speeds up query execution times.

AWS Redshift Architecture and Its Features

Redshift includes several features that improve data processing performance and compression; below are some of them:

Massively Parallel Processing (MPP) Architecture

Function: Redshift utilizes an MPP architecture, which enables it to distribute data and query execution across all available nodes and slices.
Benefit: This architecture allows Redshift to handle large volumes of data and complex analytical queries with ease, providing fast query performance.

Columnar Storage

Function: Data in Redshift is stored in columns rather than rows, which is ideal for data warehousing and analytics because it allows for highly efficient data compression and reduces the amount of data that needs to be read from disk for queries.
Benefit: This storage format is particularly advantageous for queries that involve a subset of a table's columns, as it minimizes disk I/O requirements and speeds up query execution.

Data Compression

Function: Redshift automatically applies compression techniques to data stored in its columns, significantly reducing the storage space required and increasing query performance.
Customization: Users can select from various compression algorithms, depending on the nature of their data, to optimize storage and performance further.

Redshift Spectrum

Function: An extension of Redshift's capabilities, Spectrum allows users to run queries against exabytes of data stored in Amazon S3, directly from within Redshift, without needing to load or transform the data.
Benefit: This provides a seamless integration between Redshift and the broader data ecosystem in AWS, enabling complex queries across a data warehouse and data lake.

Integrations with AWS Redshift

Redshift's ability to integrate with various AWS services and third-party applications expands its utility and flexibility. This section highlights key integrations that enhance Redshift's data warehousing capabilities.

Amazon S3 (Simple Storage Service)

Amazon S3 is an object storage service offering scalability, data availability, security, and performance. Redshift can directly query and join data stored in S3, using Redshift Spectrum, without needing to load the data into Redshift tables. Users can create external tables that reference data stored in S3, allowing Redshift to access data for querying purposes.

AWS Glue

AWS Glue can automate the ETL process for Redshift, transforming data from various sources and loading it into Redshift tables efficiently. It can also manage the data schema in the Glue Data Catalog, which Redshift can use.
As benefits, this integration simplifies data preparation, automates ETL tasks, and maintains a centralized schema catalog, resulting in reduced operational burden and faster time to insights.

AWS Lambda

You can use Lambda to pre-process data before loading it into Redshift or to trigger workflows based on query outputs. This integration automates data transformation and loading processes, enhancing data workflows and reducing the time spent on data preparation.

Amazon DynamoDB

Redshift can load data directly from DynamoDB tables using the COPY command, enabling complex analytics over your DynamoDB data in Redshift. This provides a powerful combination of real-time transactional data processing in DynamoDB with complex analytics and batch processing in Redshift, offering a more comprehensive data analysis solution.

Amazon Kinesis

Redshift integrates with Kinesis Data Firehose, which can load streaming data directly into Redshift tables. This integration enables real-time data analytics capabilities, allowing businesses to make quicker, informed decisions based on the latest data.

Conclusion

AWS Redshift exemplifies a powerful, scalable solution tailored for efficient data warehousing and complex analytics. Its integration with the broader AWS ecosystem, including S3, AWS Glue, Lambda, DynamoDB, and Amazon Kinesis, underscores its versatility and capability to streamline data workflows from ingestion to insight. Redshift's architecture, leveraging columnar storage and massively parallel processing, ensures high-speed data analysis and storage efficiency. This enables organizations to handle vast amounts of data effectively, facilitating real-time analytics and decision-making. In essence, AWS Redshift stands as a cornerstone for data-driven organizations, offering a comprehensive, future-ready platform that not only meets current analytical demands but is also poised to evolve with the advancing data landscape.
- What Data Engineers Need to Know in 2024
The Evolution of Data Engineering

Data engineering has witnessed a transformative journey, evolving from simple data collection and storage to sophisticated processing and analysis. A historical overview reveals its roots in traditional database management, progressing through the advent of big data, to today's focus on real-time analytics and cloud computing. Recent advances have been catalyzed by the integration of artificial intelligence (AI) and machine learning (ML), pushing the boundaries of what's possible in data-driven decision-making.

Core Skills for Data Engineers in 2024

To thrive in 2024, data engineers must master a blend of foundational and cutting-edge skills:

Programming Languages: Proficiency in languages like Python, Scala, and SQL is non-negotiable, enabling efficient data manipulation and analysis.
Database Management: Understanding relational and NoSQL databases, alongside data warehousing solutions, forms the backbone of effective data storage strategies.
Cloud Computing Platforms: Expertise in AWS, Google Cloud Platform, and Azure is crucial, as cloud services become central to data engineering projects.
Data Modeling & ETL Processes: Developing robust data models and streamlining ETL (Extract, Transform, Load) processes are key to ensuring data quality and accessibility.

Emerging Technologies and Their Impact

Emerging technologies such as AI and ML, big data frameworks, and automation tools are redefining the landscape:

Artificial Intelligence & Machine Learning: These technologies are vital for predictive modeling and advanced data analysis, offering unprecedented insights.
Big Data Technologies: Hadoop, Spark, and Flink facilitate the handling of vast datasets, enabling scalable and efficient data processing.
Automation and Orchestration Tools: Tools like Apache Airflow and Kubernetes enhance efficiency, automating workflows and data pipeline management.
The Importance of Data Governance and Security With increasing data breaches and privacy concerns, data governance and security have become paramount: Regulatory Compliance: Familiarity with GDPR, CCPA, and other regulations is essential for legal compliance. Data Privacy Techniques: Implementing encryption, anonymization, and secure access controls protects sensitive information from unauthorized access. Data Engineering in the Cloud Era The shift towards cloud computing necessitates a deep understanding of cloud services and technologies: Cloud Service Providers: Navigating the offerings of major providers ensures optimal use of cloud resources. Cloud-native Technologies: Knowledge of containerization, microservices, and serverless computing is crucial for modern data engineering practices. Real-time Data Processing The ability to process and analyze data in real-time is becoming increasingly important: Streaming Data Technologies: Tools like Apache Kafka and Amazon Kinesis support high-throughput, low-latency data streams. Real-time Analytics: Techniques for real-time data analysis enable immediate insights, enhancing decision-making processes. Advanced Analytics and Business Intelligence Advanced analytics and BI tools are essential for converting data into actionable insights: Predictive Analytics: Using statistical models and machine learning to predict future trends and behaviors. Visualization Tools: Tools like Tableau and Power BI help in making complex data understandable through interactive visualizations. Career Pathways and Growth Opportunities Exploring certifications, training, and staying informed about industry demand prepares data engineers for career advancement: Certification and Training: Pursuing certifications in specific technologies or methodologies can bolster expertise and credibility. Industry Demand: Understanding the evolving market demand ensures data engineers can align their skills with future opportunities. 
Preparing for the Future

Continuous learning and community engagement are key to staying relevant in the fast-paced field of data engineering:

Continuous Learning: Embracing a mindset of lifelong learning ensures data engineers can adapt to new technologies and methodologies.
Networking and Community Engagement: Participating in forums, attending conferences, and contributing to open-source projects fosters professional growth and innovation.

Conclusion

As data becomes increasingly central to business strategy, the role of data engineers in shaping the future of technology cannot be overstated. By mastering core skills, staying informed about emerging technologies, and emphasizing data governance and security, data engineers can lead the charge in leveraging data for strategic advantage in 2024 and beyond.
- Programming Language Trends for 2024: What Developers Need to Know
In the ever-evolving landscape of technology, programming languages stand as the foundational tools empowering innovation, driving progress, and shaping the digital world we inhabit. As we venture into 2024, the significance of understanding and leveraging these languages has never been more pronounced. From powering artificial intelligence to enabling seamless web development, programming languages play a pivotal role in defining the trajectory of tech trends and driving transformative change across industries. In this era of rapid technological advancement, staying abreast of the latest programming languages is not merely advantageous—it's imperative. Developers, engineers, and tech enthusiasts alike must recognize the profound impact that mastering these languages can have on their ability to navigate and thrive in the dynamic tech landscape of 2024. Programming languages serve as the building blocks of innovation, providing developers with the means to translate ideas into tangible solutions. In 2024, familiarity with cutting-edge languages equips individuals with the tools needed to push the boundaries of what's possible, whether through developing AI-driven applications, crafting immersive virtual experiences, or architecting resilient software systems. With every technological advancement comes a myriad of opportunities waiting to be seized. Whether it's capitalizing on the burgeoning fields of data science, blockchain technology, or quantum computing, proficiency in the right programming languages positions individuals to harness these opportunities and carve out their niche in the digital landscape of 2024. In an increasingly competitive job market, proficiency in in-demand programming languages can be a game-changer for career advancement. Employers across industries are seeking skilled professionals capable of leveraging the latest tools and technologies to drive business success. 
By staying ahead of the curve and mastering emerging languages, individuals can enhance their employability and unlock a wealth of career opportunities. For this post, I decided to write about the programming languages trends for 2024 and I hope this can be useful to you and taking the best decisions and which directions you want to follow this year in this large field. Python Python continues to maintain its position as one of the most popular and versatile programming languages. With its simplicity, readability, and extensive ecosystem of libraries and frameworks, Python is widely used in fields such as data science, artificial intelligence, web development, and automation. In 2024, Python's relevance is further amplified by its adoption in emerging technologies like machine learning, quantum computing, and the metaverse. Rust Rust has been gaining traction as a systems programming language known for its performance, safety, and concurrency features. In 2024, Rust is increasingly used in critical systems development, including operating systems, game engines, and web browsers. Its emphasis on memory safety and zero-cost abstractions makes it particularly suitable for building secure and reliable software, making it a favored choice for projects demanding high performance and robustness. TypeScript TypeScript, a superset of JavaScript with static typing, continues to see widespread adoption in web development. Its ability to catch errors at compile time, improve code maintainability, and enhance developer productivity has made it a preferred choice for building large-scale web applications. In 2024, TypeScript's popularity remains strong, driven by its integration with popular frameworks like Angular, React, and Vue.js, as well as its support for modern JavaScript features. Julia Julia, a high-level programming language designed for numerical and scientific computing, is gaining prominence in fields such as data science, computational biology, and finance. 
Known for its speed and ease of use, Julia combines the flexibility of dynamic languages with the performance of compiled languages, making it well-suited for tasks involving mathematical computations and large-scale data analysis. In 2024, Julia continues to attract researchers, engineers, and data scientists seeking efficient and expressive tools for scientific computing. Kotlin Kotlin, a statically-typed programming language for the Java Virtual Machine (JVM), has emerged as a popular choice for Android app development. Offering modern features, interoperability with Java, and seamless integration with popular development tools, Kotlin enables developers to build robust and efficient Android applications. In 2024, Kotlin's adoption in the Android ecosystem remains strong, driven by its developer-friendly syntax, strong tooling support, and endorsement by Google as a preferred language for Android development. Golang (Go) Go, often referred to as Golang, continues to gain traction as a language for building scalable and efficient software systems. Known for its simplicity, performance, and built-in concurrency support, Go is well-suited for developing cloud-native applications, microservices, and distributed systems. In 2024, Go's popularity is fueled by its role in enabling the development of resilient and high-performance software architectures, particularly in cloud computing, DevOps, and container orchestration. What programming languages do big tech use? Below we have an overview about programming languages that the main big techs companies are using in their stacks, so if you want to work in a Big Tech get ready to learn these languages. Conclusion In 2024, the programming landscape is characterized by a diverse set of languages, each catering to specific use cases and development requirements. 
From Python's versatility to Rust's performance, TypeScript's productivity to Julia's scientific computing capabilities, Kotlin's Android development to Go's system-level programming, developers have a rich array of tools at their disposal to tackle the challenges and opportunities presented by emerging technologies and industry trends. Whether building AI-powered applications, crafting scalable web services, or optimizing system performance, the choice of programming language plays a crucial role in shaping the success and impact of software projects in the dynamic tech landscape of 2024.
- Exploring the Power of Virtual Threads in Java 21
Introduction to Virtual Threads in Java 21 Concurrency has always been a cornerstone of Java programming, empowering developers to build responsive and scalable applications. However, managing threads efficiently while ensuring high performance and low resource consumption has been a perennial challenge. With the release of Java 21, a groundbreaking feature called Virtual Threads emerges as a game-changer in the world of concurrent programming. Concurrency challenges in Java and the problem with traditional threads Concurrency in Java presents developers with both immense opportunities for performance optimization and formidable challenges in ensuring thread safety and managing shared resources effectively. As applications scale and become more complex, navigating these challenges becomes increasingly crucial. Managing Shared Resources: One of the fundamental challenges in concurrent programming is managing shared resources among multiple threads. Without proper synchronization mechanisms, concurrent access to shared data can lead to data corruption and inconsistencies. Avoiding Deadlocks: Deadlocks occur when two or more threads are blocked indefinitely, waiting for each other to release resources. Identifying and preventing deadlocks is crucial for maintaining the responsiveness and stability of concurrent applications. Performance Bottlenecks: While concurrency can improve performance by leveraging multiple threads, it can also introduce overhead and contention, leading to performance bottlenecks. It's essential to carefully design concurrent algorithms and use appropriate synchronization mechanisms to minimize contention and maximize throughput. High Memory Overhead: Traditional threads in Java are implemented as native threads managed by the operating system. Each native thread consumes a significant amount of memory, typically in the range of several megabytes. 
This overhead becomes problematic when an application needs to create a large number of threads, as it can quickly deplete system resources.

Limited Scalability: The one-to-one mapping between Java threads and native threads imposes a limit on scalability. As the number of threads increases, so do the memory overhead and the scheduling complexity. This limits the number of concurrent tasks an application can handle efficiently, hindering its scalability and responsiveness.

Difficulty in Debugging and Profiling: Debugging and profiling concurrent applications built with traditional threads can be challenging due to the non-deterministic nature of thread execution and the potential for subtle timing-related bugs. Identifying and diagnosing issues such as race conditions and thread contention requires specialized tools and expertise.

What are Virtual Threads?

Virtual Threads represent a paradigm shift in how Java handles concurrency. Traditionally, Java applications rely on OS-level threads, which are heavyweight entities managed by the operating system. Each thread consumes significant memory resources, limiting scalability and imposing overhead on the system. Virtual Threads, on the other hand, are lightweight and managed by the Java Virtual Machine (JVM) itself. They are designed to be highly efficient, allowing thousands or even millions of virtual threads to be created without exhausting system resources. Virtual Threads offer a more scalable and responsive concurrency model compared to traditional threads.

Benefits of Virtual Threads

Virtual Threads come with a host of features and benefits that make them an attractive choice for modern Java applications:

Lightweight: Virtual Threads have minimal memory overhead, allowing for the creation of large numbers of threads without exhausting system resources. This lightweight nature makes them ideal for highly concurrent applications.
Structured Concurrency: Virtual Threads pair naturally with structured concurrency, which helps developers write more reliable and maintainable concurrent code. By enforcing clear boundaries and lifecycles for concurrent tasks, structured concurrency simplifies error handling and resource management.

Improved Scalability: With Virtual Threads, developers can achieve higher scalability and throughput compared to traditional threads. The JVM's scheduler efficiently manages virtual threads, ensuring optimal utilization of system resources.

Integration with CompletableFuture: Virtual Threads integrate seamlessly with CompletableFuture, simplifying asynchronous programming. CompletableFuture provides a fluent API for composing and chaining asynchronous tasks, making it easier to write non-blocking, responsive applications.

Examples of Virtual Threads

Creating and Running a Virtual Thread

This example demonstrates the creation and execution of a virtual thread. We use the Thread.startVirtualThread() method to start a new virtual thread with the specified task, which prints a message indicating its execution. We then call join() on the virtual thread to wait for its completion before proceeding.

CompletableFuture with Virtual Threads

This example showcases the usage of virtual threads with CompletableFuture. We chain asynchronous tasks using the supplyAsync(), thenApplyAsync(), and thenAcceptAsync() methods. These tasks execute on virtual threads, allowing for efficient asynchronous processing.

Virtual Thread Executor Example

In this example, we create a virtual thread executor using Executors.newVirtualThreadPerTaskExecutor(), which starts a fresh virtual thread for every task. We then submit tasks to it using the submit() method, demonstrating efficient concurrency management.

Using ThreadFactory with Virtual Threads

Here, we demonstrate the use of a ThreadFactory with virtual threads.
We create a virtual thread factory using Thread.ofVirtual().factory(), and then use it to create a fixed-size thread pool with Executors.newFixedThreadPool(). Tasks submitted to this pool execute on virtual threads created by that factory.

Virtual Threads and Thread Groups

Unlike platform threads, virtual threads cannot be organized into custom thread groups: the JVM places every virtual thread in a single, automatically assigned group named "VirtualThreads", which is what getThreadGroup() returns for them. The Thread.builder() API seen in early Project Loom previews did not survive into the final Java 21 release; virtual thread creation goes through Thread.ofVirtual() instead.

Conclusion

In conclusion, Virtual Threads introduced in Java 21 mark a significant milestone in the evolution of Java's concurrency model. By providing lightweight, scalable concurrency within the JVM, Virtual Threads address many of the limitations associated with traditional threads, offering developers a more efficient and flexible approach to concurrent programming. With Virtual Threads, developers can create and manage thousands or even millions of threads with minimal overhead, leading to improved scalability and responsiveness in Java applications. The structured concurrency model that pairs with Virtual Threads simplifies error handling and resource management, making it easier to write reliable and maintainable concurrent code. Furthermore, the integration of Virtual Threads with CompletableFuture and other asynchronous programming constructs enables developers to leverage the full power of Java's concurrency framework while benefiting from the performance advantages of Virtual Threads. As developers continue to explore and adopt Virtual Threads, we can expect further optimizations and enhancements that will elevate Java's capabilities in concurrent programming.
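The original code listings for these examples did not survive into this page, so here is a minimal sketch of the first patterns described above using the final Java 21 API; the class name VirtualThreadsDemo and the "worker-" name prefix are illustrative only:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadFactory;

public class VirtualThreadsDemo {
    public static void main(String[] args) throws Exception {
        // 1) Creating and running a single virtual thread
        Thread vt = Thread.startVirtualThread(
                () -> System.out.println("Running in: " + Thread.currentThread()));
        vt.join(); // wait for the virtual thread to finish

        // 2) CompletableFuture stages executing on virtual threads
        try (ExecutorService executor = Executors.newVirtualThreadPerTaskExecutor()) {
            CompletableFuture
                    .supplyAsync(() -> "payload", executor)
                    .thenApplyAsync(String::toUpperCase, executor)
                    .thenAcceptAsync(System.out::println, executor)
                    .join();
        } // close() waits for submitted tasks to complete (Java 19+)

        // 3) A fixed-size executor whose worker threads are virtual
        ThreadFactory factory = Thread.ofVirtual().name("worker-", 0).factory();
        try (ExecutorService pool = Executors.newFixedThreadPool(2, factory)) {
            for (int i = 0; i < 4; i++) {
                final int task = i;
                pool.submit(() -> System.out.println("Task " + task
                        + " on " + Thread.currentThread().getName()));
            }
        }
    }
}
```

Note that pooling virtual threads, as in the third snippet, is usually unnecessary since they are cheap to create; the per-task executor in the second snippet is the idiomatic choice.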
- Listing AWS Glue tables
Using an AWS SDK is always a good option when you need to explore a feature further in search of a solution. In this post, we're going to explore some of AWS Glue using the SDK and Java. Glue is an AWS ETL tool that provides a central repository of metadata, called the Glue Catalog. In short, the Glue Catalog keeps the entire structure of databases and tables, along with their schemas, in a single place. The idea of this post is to programmatically list all the tables of a given database in the Glue Catalog using the SDK and Java.

Maven dependencies

In this example, we're using Java 8 so we can better explore the use of Streams when iterating over the results.

Understanding the code

The awsGlue object is responsible for accessing the service through credentials that must be configured; we won't go into that detail in this post. The getTablesRequest object sets the request parameters, in this case the database. The getTablesResult object lists the tables based on the parameters set by getTablesRequest and also controls the result flow. Note that in addition to returning the tables through the getTablesResult.getTableList() method, this same object returns a token via getTablesResult.getNextToken().

The idea of the token is to control the flow of results: all results are paged, and as long as a result carries a token, there is still data to be returned. In the code, we used a loop that checks for the existence of the token. If there is one, it is set on the getTablesRequest object through getTablesRequest.setNextToken(token) to fetch the next page of results. It's a way to paginate results.
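The original listing was lost, but the loop described above might look like this minimal sketch, assuming the aws-java-sdk-glue Maven dependency (AWS SDK for Java v1) and configured credentials; the database name "mydb" is hypothetical:

```java
import com.amazonaws.services.glue.AWSGlue;
import com.amazonaws.services.glue.AWSGlueClientBuilder;
import com.amazonaws.services.glue.model.GetTablesRequest;
import com.amazonaws.services.glue.model.GetTablesResult;
import com.amazonaws.services.glue.model.Table;

public class ListGlueTables {
    public static void main(String[] args) {
        // credentials are resolved from the default provider chain
        AWSGlue awsGlue = AWSGlueClientBuilder.defaultClient();

        GetTablesRequest getTablesRequest = new GetTablesRequest()
                .withDatabaseName("mydb"); // hypothetical database name

        String token = null;
        do {
            getTablesRequest.setNextToken(token);
            GetTablesResult getTablesResult = awsGlue.getTables(getTablesRequest);

            // print each table name using a Java 8 Stream
            getTablesResult.getTableList().stream()
                    .map(Table::getName)
                    .forEach(System.out::println);

            // a non-null token means there are more pages to fetch
            token = getTablesResult.getNextToken();
        } while (token != null);
    }
}
```

The do/while on the token is the pagination pattern the post describes: each page is fetched, printed, and the loop exits once getNextToken() returns null.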
Books to study and read

If you want to learn more and reach a high level of knowledge, I strongly recommend reading the following book(s): AWS Cookbook is a practical guide containing 70 hands-on recipes about AWS resources and how to solve different challenges. It's a well-written, easy-to-understand book covering key AWS services through practical examples. AWS (Amazon Web Services) is the most widely used cloud service in the world today; if you want to understand the subject better and be well positioned in the market, I strongly recommend studying it.

Setup recommendations

If you're interested in the setup I use to develop my tutorials, here it is: Notebook Dell Inspiron 15 15.6; Monitor LG Ultrawide 29WL500-29.

Well that's it, I hope you enjoyed it!
- Creating AWS CloudWatch alarms
The use of alarms is an essential requirement when working with various resources in the cloud. It is one of the most efficient ways to monitor and understand the behavior of an application when its metrics differ from what is expected. In this post, we're going to create an alarm from scratch using AWS CloudWatch, based on a specific scenario. There are several other tools that allow us to set up alarms, but when working with AWS, setting up alarms with CloudWatch is very simple and fast.

Use Case Scenario

For a better understanding, suppose we create a resiliency mechanism in an architecture to prevent data loss. This mechanism acts whenever something goes wrong, such as components not working as expected and sending failure messages to an SQS queue. CloudWatch allows us to set an alarm so that when a message is sent to this queue, the alarm is triggered. First of all, we need to create a queue and send messages just to generate some metrics that we're going to use in our alarm; that's a way to simulate a production environment. After creating the queue and the alarm, we'll send more messages to test the alarm.

Creating an SQS Queue

Let's create a simple SQS queue and choose some metrics that we can use in our alarm. Access the AWS console, type "sqs" in the search bar, and then access the service. After accessing the service, click Create queue. Let's create a Standard queue for this example and name it sqs-messages. You don't need to pay attention to the other details; just click the Create queue button to finish. The queue has been created; in the next step we'll send a few messages just to generate metrics.

Sending messages

Let's send a few messages to the previously created queue; feel free to change the message content if you want to. After sending these messages, some metrics will be generated automatically according to the action.
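If you prefer to send the test messages from code rather than the console, a sketch with the AWS SDK for Java v1 (aws-java-sdk-sqs dependency and configured credentials assumed) could look like this; the queue name matches the sqs-messages queue created above:

```java
import com.amazonaws.services.sqs.AmazonSQS;
import com.amazonaws.services.sqs.AmazonSQSClientBuilder;
import com.amazonaws.services.sqs.model.SendMessageRequest;

public class SendTestMessages {
    public static void main(String[] args) {
        // credentials come from the default provider chain
        AmazonSQS sqs = AmazonSQSClientBuilder.defaultClient();

        // resolve the queue URL from the queue name created in the console
        String queueUrl = sqs.getQueueUrl("sqs-messages").getQueueUrl();

        // send a handful of messages to generate NumberOfMessagesSent datapoints
        for (int i = 1; i <= 6; i++) {
            sqs.sendMessage(new SendMessageRequest(queueUrl, "test message " + i));
        }
    }
}
```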
In this case, a metric called NumberOfMessagesSent was created in CloudWatch, and we can use it to create the alarm.

Creating an Alarm

For our example, let's choose the metric based on the number of messages sent (NumberOfMessagesSent). Access CloudWatch by searching for it in the AWS console. After accessing the service, click the Alarms/In alarm option on the left side of the screen and then click the Create alarm button. Click Select metric, choose SQS, then Queue Metrics. Search for the queue name, select the metric name column item labeled NumberOfMessagesSent, then click Select metric.

Setting metrics

Metric name: the metric chosen in the previous steps. This metric measures the number of messages sent to SQS (NumberOfMessagesSent).

QueueName: the name of the SQS queue on which the alarm will be configured.

Statistic: in this field we can choose options such as Average, Sum, Minimum, and more. The choice depends on the context of the alarm and the metric. For this example we choose Sum, because we want the sum of the number of messages sent in a given period.

Period: in this field we define the period over which the metric is evaluated against the limit condition, which will be defined in the next steps.

Setting conditions

Threshold type: for this example we will use Static.

Whenever NumberOfMessagesSent is...: let's select the Greater option.

Than...: in this field we configure the value of NumberOfMessagesSent that triggers the alarm. Let's put 5.

Additional configuration

In the additional configuration, we have the "Datapoints to alarm" field, whose operation I would like to detail a little more.

Datapoints to alarm

This option makes the alarm configuration more flexible, combined with the previously defined conditions. By default, this setting is 1 of 1. How does it work?
The first field is the number of datapoints and the second is the number of evaluation periods. With the default of 1 of 1, the alarm is triggered if the NumberOfMessagesSent sum is greater than 5 within a single 5-minute period; in other words, the default additional configuration does not change the settings defined previously.

Now, let's change this setting to understand it better: from 1 of 1 to 2 of 2. This means that when the alarm condition is met, i.e. the sum of NumberOfMessagesSent is greater than 5, the alarm will only be triggered if there are 2 such datapoints within 10 minutes. Note that the window is the 5-minute period multiplied by the second field's value of 2. Summarizing: even if the condition is met, the alarm will only be triggered if there are 2 datapoints above the threshold in a period of 10 minutes. We will understand this even better when we run some alarm activation tests. Let's keep these settings and click Next.

Configuring actions

On the next screen, we configure the actions responsible for notifying a destination when the alarm is triggered. Here we keep the In alarm setting, create a new topic, and add an email address that should receive the error notifications. Select the option Create new topic, fill in a desired name, and enter a valid email in the Email endpoints that will receive notification ... field. Once completed, click Create topic; an email will be sent to confirm the subscription to the created topic. Make sure you've received the confirmation email, then click Next on the alarm screen to proceed. Now, add the name of the alarm and click Next. The next screen is the review screen; click Create alarm to finish. Okay, now we have an alarm created and it's time to test it.
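The same alarm can also be created programmatically. Below is a sketch using the AWS SDK for Java v1 (aws-java-sdk-cloudwatch dependency and configured credentials assumed); the SNS topic ARN is a hypothetical placeholder you would replace with the topic created above:

```java
import com.amazonaws.services.cloudwatch.AmazonCloudWatch;
import com.amazonaws.services.cloudwatch.AmazonCloudWatchClientBuilder;
import com.amazonaws.services.cloudwatch.model.ComparisonOperator;
import com.amazonaws.services.cloudwatch.model.Dimension;
import com.amazonaws.services.cloudwatch.model.PutMetricAlarmRequest;
import com.amazonaws.services.cloudwatch.model.Statistic;

public class CreateSqsAlarm {
    public static void main(String[] args) {
        AmazonCloudWatch cw = AmazonCloudWatchClientBuilder.defaultClient();

        PutMetricAlarmRequest request = new PutMetricAlarmRequest()
                .withAlarmName("sqs-messages-alarm")
                .withNamespace("AWS/SQS")
                .withMetricName("NumberOfMessagesSent")
                .withDimensions(new Dimension()
                        .withName("QueueName").withValue("sqs-messages"))
                .withStatistic(Statistic.Sum)
                .withPeriod(300)               // 5-minute period, in seconds
                .withEvaluationPeriods(2)      // the "of 2" part (10-minute window)
                .withDatapointsToAlarm(2)      // the "2 of" part
                .withThreshold(5.0)
                .withComparisonOperator(ComparisonOperator.GreaterThanThreshold)
                // hypothetical ARN; use the SNS topic created in the console
                .withAlarmActions("arn:aws:sns:us-east-1:123456789012:my-alarm-topic");

        cw.putMetricAlarm(request);
    }
}
```

The pairing of withDatapointsToAlarm(2) with withEvaluationPeriods(2) mirrors the "2 of 2" setting configured in the console walkthrough.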
Alarm Testing

In the beginning we sent a few messages just to generate the NumberOfMessagesSent metric, but now we need to send enough messages to trigger the alarm. So let's send more messages and see what happens. After sending the messages, notice that even though the threshold was exceeded, the alarm was not triggered: the threshold was only exceeded in 1 datapoint within the 10-minute window. Now, let's send continuous batches of messages that exceed the threshold in short intervals within the 10-minute window. This time the alarm is triggered, because in addition to meeting the condition specified in the settings, it also reached 2 datapoints above the threshold. Check the email added in the notification settings; a message should have arrived with the alarm details. The alarm status will return to OK once messages no longer exceed the threshold.

Well that's it, I hope you enjoyed it!