Understanding Centralized Logging

Modern distributed applications are built from a large number of microservices. This helps organizations achieve loose coupling, parallel development, and easier code maintenance. However, these advantages come at the cost of greater debugging complexity: stitching together pieces of evidence from logs scattered across hundreds of services to arrive at a root cause is a herculean task. This is where centralized logging makes a difference.

Centralized logging involves aggregating logs from several services and making them available in a single place. This gives developers a consolidated view of all activity in a system, helping them identify issues more quickly.

The single-system approach no longer works

Traditionally, logging systems dealt with activity happening on a single physical instance. In the world of modern microservice-based architecture, however, this is no longer enough. For example, suppose a customer complains that your application was slow at a particular time. The traditional approach would mean logging into each system or service, analyzing its logs in isolation, and then manually correlating them with events that occurred on other instances. That is not practical. Centralized logging helps developers troubleshoot such issues easily by placing everything they need in one place.

Centralized logging uses logging agents to capture logs from individual services. With a centralized place to analyze logs across all services, developers can easily correlate different events, leading to faster root cause analysis. Logs can be stored for an indefinite time period, allowing developers to do deeper analysis. Centralized logging also opens up possibilities for automated monitoring and troubleshooting.

Taking the centralized logging approach

Centralized logging involves three broad steps: log capture, transformation, and transport to a logging backend.

Capture

A logging agent is deployed alongside each individual service to read its application logs. Logs are captured from the output streams or files the applications produce.

Transform

Pluggable transformation modules modify the captured logs to formats appropriate for analysis and transport. Transformation modules are also used to filter logs so that developers can choose the relevant logs for their analysis.
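To make the transformation step concrete, here is a minimal sketch using Java's default logging framework: a custom Formatter that rewrites each record as a single JSON line, a shape most logging backends can parse. The class name and field choices here are illustrative rather than part of any particular agent, and the escaping is deliberately minimal.

import java.util.logging.Formatter;
import java.util.logging.LogRecord;

// Illustrative transformation step: render each log record as one JSON line.
public class JsonLineFormatter extends Formatter {

    @Override
    public String format(LogRecord record) {
        // Minimal escaping for illustration; production code should also
        // escape backslashes and control characters.
        String message = formatMessage(record).replace("\"", "\\\"");
        return String.format(
            "{\"timestamp\":%d,\"level\":\"%s\",\"logger\":\"%s\",\"message\":\"%s\"}%n",
            record.getMillis(),
            record.getLevel().getName(),
            record.getLoggerName(),
            message);
    }
}

Filtering works the same way: the agent applies a predicate to each record and drops irrelevant records before transport.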

Transport

The logging agent then transports the logs to a logging backend. The logging backend is a highly available service with visualization and analysis capabilities. It aggregates logs captured from all the services and stores them in a searchable form. Logging backends provide standard interfaces to connect from different log capture systems.
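As a rough sketch of the transport step, the snippet below posts a single log line to a backend's HTTP ingest endpoint using Java's built-in HttpClient (Java 11+). The URL, authorization header, and payload shape are placeholder assumptions, not any specific backend's API; a real agent batches records, buffers, and retries on failure.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class LogShipper {

    private static final HttpClient CLIENT = HttpClient.newHttpClient();

    // Hypothetical endpoint and token; substitute your backend's ingest API.
    private static final String INGEST_URL = "https://logs.example.com/ingest";
    private static final String TOKEN = "<INGEST-TOKEN-GOES-HERE>";

    public static void ship(String jsonLine) throws Exception {
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(INGEST_URL))
                .header("Authorization", "Bearer " + TOKEN)
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(jsonLine))
                .build();
        HttpResponse<String> response =
                CLIENT.send(request, HttpResponse.BodyHandlers.ofString());
        // A production agent would retry and buffer instead of just printing.
        System.out.println("Ingest responded with HTTP " + response.statusCode());
    }
}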

CrowdStrike Falcon LogScale

One logging backend option for centralized logging is a fully managed service like Falcon LogScale. LogScale can accept logs from multiple sources at the same time and provides options for analyzing logs through different kinds of searches. It also provides dashboards and visualization features that are updated in real time.

Implementing Centralized Logging in Java

In the first part of this series, we learned about the different handlers supported by Java's default logging framework. These handlers, such as StreamHandler or FileHandler, provide the hooks for integrating different logging backends.
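As a quick refresher, attaching a handler takes only a line or two. The minimal snippet below wires a ConsoleHandler to a logger; swapping in a different handler changes where the records end up.

import java.util.logging.ConsoleHandler;
import java.util.logging.Logger;

public class HandlerRecap {

    public static void main(String[] args) {
        Logger logger = Logger.getLogger("myapp.logging");
        // Avoid duplicate output through the root logger's default console handler.
        logger.setUseParentHandlers(false);
        // Each handler routes the same records to a different destination.
        logger.addHandler(new ConsoleHandler());
        logger.info("Handlers decide where this record ends up");
    }
}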

The FileHandler writes logs to a file. In the following example, we'll use the FileHandler together with the LogScale Log Collector to ship logs from your application to LogScale. After installing the collector, configuring it to ship your Java application logs to Falcon LogScale is straightforward.

Implementation

The first step in implementing centralized logging is to develop a FileHandler-based logging class, as shown in the snippet below. You can then use this class wherever you need to log entries in your application code.

import java.io.IOException;
import java.util.logging.FileHandler;
import java.util.logging.Logger;

public class FileHandlerTest {

    private FileHandler handler = null;
    private static Logger logger = Logger.getLogger("myapp.logging");

    public FileHandlerTest(String filename) {
        try {
            // Attach a FileHandler so records are written to the given file.
            // Without an explicit Formatter, FileHandler emits XML-formatted records.
            handler = new FileHandler(filename);
            logger.addHandler(handler);
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    public void logMessage(String message) {
        // Log at INFO level; the attached FileHandler persists the record.
        logger.info(message);
    }
}

The class above initializes a FileHandler and adds it as a handler to the Logger instance. It also contains a logMessage method that can be used to log info-level messages.

In the application code, you can use the following code to initialize and log information:

FileHandlerTest fileLog = new FileHandlerTest("/home/user/logs/app_%u_%g.log");
fileLog.logMessage("This is a test message");

In the file name pattern, %u is a unique number used to resolve conflicts between processes, and %g is the generation number used to distinguish rotated log files.
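For reference, because FileHandler defaults to the XMLFormatter, the call above produces a record along these lines in the log file (abridged; the exact fields and timestamp format vary by JVM version):

<record>
  <date>2024-01-15T10:42:07.123Z</date>
  <logger>myapp.logging</logger>
  <level>INFO</level>
  <message>This is a test message</message>
</record>

If you prefer plain-text lines, set a java.util.logging.SimpleFormatter on the handler with handler.setFormatter(new SimpleFormatter()).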

Configuration

The next step is to configure the LogScale Log Collector to read entries from the log file and push them to LogScale, as shown in the example configuration below. The collector configuration includes three sections:

  1. dataDirectory: Specifies where the collector stores its own data.
  2. sources: Specifies the directories or files from which logs are gathered.
  3. sinks: Specifies where the logs will be shipped.

dataDirectory: /var/lib/log-collector

sources:
  java_logs:
    type: file
    directory: /home/user/logs/*
    sink: javasink

sinks:
  javasink:
    type: humio
    token: <INGEST-TOKEN-GOES-HERE>
    url: https://cloud.community.humio.com

As the url field in the sinks section shows, we are shipping logs to a Falcon LogScale Community Edition account. The endpoint you use will depend on the type of Falcon LogScale account you have; you can reference the endpoints documentation for the appropriate URL.

Additionally, for authentication to LogScale, we include an ingest token generated for our LogScale log repository.

Once done, the configuration for setting up centralized logging in Java is complete. The logs from your individual applications will now be streamed to your LogScale cloud instance, where they are ingested, parsed, and made ready for search and analysis.
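With the data flowing, you can query it from the LogScale search page. As a simple example, a free-text search finds the test message we logged earlier, and piping matches into count() tallies them; count() is a standard LogScale aggregate, while any field names you filter on will depend on how your events were parsed.

// Free-text search across the repository
"This is a test message"

// Count how many matching events have arrived
"This is a test message" | count()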

Log your data with CrowdStrike Falcon Next-Gen SIEM

Elevate your cybersecurity with the CrowdStrike Falcon® platform, the premier AI-native platform for SIEM and log management. Experience security logging at petabyte scale, choosing between cloud-native and self-hosted deployment options. Log your data with a powerful, index-free architecture that avoids ingestion bottlenecks and supports threat hunting across over 1 PB of data ingestion per day. Real-time search with sub-second latency for complex queries helps you outpace adversaries. Benefit from 360-degree visibility that consolidates data to break down silos, enabling security, IT, and DevOps teams to hunt threats, monitor performance, and ensure compliance seamlessly across 3 billion events in less than 1 second.


Arfan Sharif is a product marketing lead for the Observability portfolio at CrowdStrike. He has over 15 years of experience driving Log Management, ITOps, Observability, Security, and CX solutions for companies such as Splunk, Genesys, and Quest Software. Arfan graduated in Computer Science from Bucks and Chilterns University and has a career spanning Product Marketing and Sales Engineering.