
Upgrade guide - Powertools for AWS Lambda (Java)


End of support notice

On December 12th, 2025, Powertools for AWS Lambda (Java) v1 will reach end of support and will no longer receive updates or releases. If you are still using v1, we strongly recommend that you read our upgrade guide and update to the latest version.

Given our commitment to all of our customers using Powertools for AWS Lambda (Java), we will keep Maven Central v1 releases and a v1 documentation archive to prevent any disruption.

Migrate to v2 from v1

We strongly encourage you to migrate to v2. Refer to our versioning policy to learn more about our version support process.

We've made minimal breaking changes to make your transition to v2 as smooth as possible.

Quick summary

v2 introduces the changes summarized below, together with whether a code change is necessary. Each change that requires a code change is explained in detail in its own section.

First Steps

Before you start, we suggest making a copy of your current working project or creating a new branch with git.

  1. Upgrade Java to at least version 11. While version 11 is supported, we recommend using the newest available LTS version of Java.
  2. Review the following section to confirm if you need to make changes to your code.
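
For Maven projects, the Java version can be pinned via the compiler release property; a minimal sketch using standard Maven conventions (adjust to your build setup):

```xml
<properties>
    <!-- Compile for Java 11, the minimum supported by Powertools v2 -->
    <maven.compiler.release>11</maven.compiler.release>
</properties>
```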
Redesigned Logging Utility

The logging utility was redesigned from scratch to better follow idiomatic Java conventions and to remove the hard dependency on log4j as the logging implementation. The new logging utility supports SLF4J as the logging interface and gives you the choice between log4j2 and logback as the logging implementation. Consider the following steps to migrate from the v1 logging utility to the v2 logging utility:

1. Remove powertools-logging dependency and replace it with your logging backend of choice

In order to support different logging implementations, dedicated logging modules were created for the different logging implementations. Remove powertools-logging as a dependency and replace it with either powertools-logging-log4j or powertools-logging-logback.

<!-- BEFORE v2 -->
- <dependency>
-     <groupId>software.amazon.lambda</groupId>
-     <artifactId>powertools-logging</artifactId>
-     <version>1.x.x</version>
- </dependency>

<!-- AFTER v2 -->
+ <dependency>
+     <groupId>software.amazon.lambda</groupId>
+     <artifactId>powertools-logging-log4j</artifactId>
+     <version>2.x.x</version>
+ </dependency>

The AspectJ configuration still needs to depend on powertools-logging

We have only replaced the logging implementation dependency. The AspectJ configuration still needs to depend on powertools-logging which contains the main logic.

<aspectLibrary>
    <groupId>software.amazon.lambda</groupId>
    <artifactId>powertools-logging</artifactId>
</aspectLibrary>

2. Update log4j2.xml including new JsonTemplateLayout

This step is only required if you are using log4j2 as your logging implementation. The deprecated <LambdaJsonLayout/> element was removed. Replace it with log4j2's generic <JsonTemplateLayout/> element.

<?xml version="1.0" encoding="UTF-8"?>
<Configuration>
    <Appenders>
        <Console name="JsonAppender" target="SYSTEM_OUT">
-           <LambdaJsonLayout compact="true" eventEol="true"/>
+           <JsonTemplateLayout eventTemplateUri="classpath:LambdaJsonLayout.json" />
        </Console>
    </Appenders>
    <Loggers>
        <Logger name="JsonLogger" level="INFO" additivity="false">
            <AppenderRef ref="JsonAppender"/>
        </Logger>
        <Root level="info">
            <AppenderRef ref="JsonAppender"/>
        </Root>
    </Loggers>
</Configuration>

3. Migrate all logging specific calls to SLF4J native primitives (recommended)

The new logging utility is designed to integrate seamlessly with SLF4J so that customers can adopt it without large code refactorings. This requires migrating the non-native SLF4J primitives from the v1 Logging utility.

While we recommend using SLF4J as a logging-implementation-independent facade, you can still use the log4j2 and logback interfaces directly.

Consider the following code example, which shows how to achieve the same functionality with v1 and v2 Logging:

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import software.amazon.lambda.powertools.logging.Logging;
// ... other imports

public class PaymentFunction implements RequestHandler<APIGatewayProxyRequestEvent, APIGatewayProxyResponseEvent> {
    // BEFORE v2: Uses org.apache.logging.log4j.LogManager
-   private static final Logger LOGGER = LogManager.getLogger(PaymentFunction.class);
    // AFTER v2: Use org.slf4j.LoggerFactory
+   private static final Logger LOGGER = LoggerFactory.getLogger(PaymentFunction.class);

    @Logging
    public APIGatewayProxyResponseEvent handleRequest(final APIGatewayProxyRequestEvent input, final Context context) {
        // ...

        // BEFORE v2: Uses LoggingUtils.appendKey to append custom global keys
        // LoggingUtils was removed!
-       LoggingUtils.appendKey("cardNumber", card.getId());
        // AFTER v2: Uses native SLF4J Mapped Diagnostic Context (MDC)
+       MDC.put("cardNumber", card.getId());

        // Regular logging has not changed
        LOGGER.info("My log message with argument.");

        // Adding custom keys on a specific log message
        // BEFORE v2: No direct way, only supported via LoggingUtils.appendKey and LoggingUtils.removeKey
        // AFTER v2: Extensive support for StructuredArguments
+       LOGGER.info("Collecting payment", StructuredArguments.entry("orderId", order.getId()));
        // { "message": "Collecting payment", ..., "orderId": 123}
        Map<String, Object> customKeys = new HashMap<>();
        customKeys.put("paymentId", payment.getId());
        customKeys.put("amount", payment.getAmount());
+       LOGGER.info("Payment successful", StructuredArguments.entries(customKeys));
        // { "message": "Payment successful", ..., "paymentId": 123, "amount": 12.99}
    }
}

Make sure to learn more about the advanced structured argument serialization features in the Logging v2 documentation.
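
If you choose logback instead of log4j2, a minimal logback.xml along these lines should work; the LambdaJsonEncoder class name is taken from the powertools-logging-logback module and should be verified against the Logging v2 documentation:

```xml
<configuration>
    <appender name="console" class="ch.qos.logback.core.ConsoleAppender">
        <!-- JSON structured-logging encoder shipped with powertools-logging-logback -->
        <encoder class="software.amazon.lambda.powertools.logging.logback.LambdaJsonEncoder"/>
    </appender>
    <root level="INFO">
        <appender-ref ref="console"/>
    </root>
</configuration>
```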

Updated Metrics utility interface

The Metrics utility was redesigned to be more modular and to allow for the addition of new metrics providers in the future. The same EMF-based metrics logging still applies but is now called via an updated public interface.

The following example shows a common Lambda handler using the Metrics utility and required refactorings.

// Metrics is not a decorator anymore but the replacement for the `MetricsLogger` Singleton
import software.amazon.lambda.powertools.metrics.Metrics;
+ import software.amazon.lambda.powertools.metrics.FlushMetrics;
- import software.amazon.lambda.powertools.metrics.MetricsUtils;
+ import software.amazon.lambda.powertools.metrics.MetricsFactory;
- import software.amazon.cloudwatchlogs.emf.logger.MetricsLogger;
- import software.amazon.cloudwatchlogs.emf.model.DimensionSet;
- import software.amazon.cloudwatchlogs.emf.model.Unit;
+ import software.amazon.lambda.powertools.metrics.model.DimensionSet;
+ import software.amazon.lambda.powertools.metrics.model.MetricUnit;

public class MetricsEnabledHandler implements RequestHandler<Object, Object> {

    // This is still a Singleton
-   MetricsLogger metricsLogger = MetricsUtils.metricsLogger();
+   Metrics metrics = MetricsFactory.getMetricsInstance();

    @Override
-   @Metrics(namespace = "ExampleApplication", service = "booking")
+   @FlushMetrics(namespace = "ExampleApplication", service = "booking")
    public Object handleRequest(Object input, Context context) {
-       metricsLogger.putDimensions(DimensionSet.of("environment", "prod"));
+       metrics.addDimension(DimensionSet.of("environment", "prod"));
        // New method overload for adding 2D dimensions more conveniently
+       metrics.addDimension("environment", "prod");
-       metricsLogger.putMetric("SuccessfulBooking", 1, Unit.COUNT);
+       metrics.addMetric("SuccessfulBooking", 1, MetricUnit.COUNT);
        ...
    }
}

Learn more about the redesigned Metrics utility in the Metrics documentation.

Updated Tracing annotation

The deprecated captureError and captureResponse arguments of the @Tracing annotation were removed in v2 and replaced by a new captureMode parameter, which accepts a CaptureMode enum value.

You should update your code using the new captureMode argument:

- @Tracing(captureError = false, captureResponse = false)
+ @Tracing(captureMode = CaptureMode.DISABLED)
public APIGatewayProxyResponseEvent handleRequest(APIGatewayProxyRequestEvent input, Context context) {
    // ...
}

Learn more about valid CaptureMode values in the Tracing documentation.

Idempotency utility split into sub-modules by provider

The Idempotency utility was split from the common powertools-idempotency package into individual packages for different persistence store providers. The main business logic is now in the powertools-idempotency-core package.

You should now include the powertools-idempotency-core package as an AspectJ library and the provider package like powertools-idempotency-dynamodb as a regular dependency.

<!-- BEFORE v2 -->
- <dependency>
-     <groupId>software.amazon.lambda</groupId>
-     <artifactId>powertools-idempotency</artifactId>
-     <version>1.x.x</version>
- </dependency>
<!-- AFTER v2 -->
<!-- In dependencies section -->
+ <dependency>
+     <groupId>software.amazon.lambda</groupId>
+     <artifactId>powertools-idempotency-dynamodb</artifactId>
+     <version>2.x.x</version>
+ </dependency>
<!-- In AspectJ configuration section -->
+ <aspectLibrary>
+     <groupId>software.amazon.lambda</groupId>
+     <artifactId>powertools-idempotency-core</artifactId>
+ </aspectLibrary>
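
After updating the dependencies, handler code stays largely the same; the sketch below assumes that DynamoDBPersistenceStore now lives in a dynamodb sub-package of the powertools-idempotency-dynamodb module (verify the exact package and builder names against the Idempotency documentation):

```java
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import software.amazon.lambda.powertools.idempotency.Idempotency;
import software.amazon.lambda.powertools.idempotency.Idempotent;
// Assumed v2 package for the DynamoDB provider; confirm in the Idempotency docs
import software.amazon.lambda.powertools.idempotency.persistence.dynamodb.DynamoDBPersistenceStore;

public class SubscriptionHandler implements RequestHandler<String, String> {

    public SubscriptionHandler() {
        // Configure the persistence store once, at initialization time
        Idempotency.config()
                .withPersistenceStore(DynamoDBPersistenceStore.builder()
                        .withTableName(System.getenv("IDEMPOTENCY_TABLE"))
                        .build())
                .configure();
    }

    @Idempotent
    @Override
    public String handleRequest(String event, Context context) {
        // Repeated deliveries of the same payload return the stored response
        return "processed:" + event;
    }
}
```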

Parameters utility split into sub-modules by provider

Parameters utilities were split from the common powertools-parameters package into individual packages for different parameter providers. You should now include the specific parameters dependency for your provider. If you use multiple providers, you can include multiple packages. Each parameter provider needs to be included as a dependency and an AspectJ library to use annotations.

This new structure reduces the bundle size of your deployment package.

<!-- BEFORE v2 -->
<!-- In dependencies section -->
- <dependency>
-     <groupId>software.amazon.lambda</groupId>
-     <artifactId>powertools-parameters</artifactId>
-     <version>1.x.x</version>
- </dependency>
<!-- In AspectJ configuration section -->
- <aspectLibrary>
-     <groupId>software.amazon.lambda</groupId>
-     <artifactId>powertools-parameters</artifactId>
- </aspectLibrary>
<!-- AFTER v2 -->
<!-- In dependencies section -->
+ <dependency>
+     <groupId>software.amazon.lambda</groupId>
+     <artifactId>powertools-parameters-secrets</artifactId>
+     <version>2.x.x</version>
+ </dependency>
<!-- ... your other providers -->
<!-- In AspectJ configuration section -->
+ <aspectLibrary>
+     <groupId>software.amazon.lambda</groupId>
+     <artifactId>powertools-parameters-secrets</artifactId>
+ </aspectLibrary>
<!-- ... your other providers -->

Find the full list of supported providers in the Parameters utility documentation.
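
In code, each provider is now constructed via its own builder rather than through a central entry point; a sketch for Secrets Manager, assuming the v2 builder API (confirm class and method names against the Parameters documentation):

```java
// Assumed v2 package and builder API for the Secrets Manager provider
import software.amazon.lambda.powertools.parameters.secrets.SecretsProvider;

public class SecretsLookup {
    public static void main(String[] args) {
        // Each provider is built individually in v2
        SecretsProvider secretsProvider = SecretsProvider.builder().build();

        // Retrieve a secret value; results are cached between calls
        String apiKey = secretsProvider.get("/my-app/api-key");
        System.out.println(apiKey);
    }
}
```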

Custom Resources updates the Response class

The Response class supporting CloudFormation Custom Resource implementations was updated to remove deprecated methods.

The Response.failed() and Response.success() methods without parameters were removed and now require the physical resource ID. Update your code as follows:

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.events.CloudFormationCustomResourceEvent;
import software.amazon.lambda.powertools.cloudformation.AbstractCustomResourceHandler;
import software.amazon.lambda.powertools.cloudformation.Response;

public class MyCustomResourceHandler extends AbstractCustomResourceHandler {

    // ...

    @Override
    protected Response update(CloudFormationCustomResourceEvent updateEvent, Context context) {
+       String physicalResourceId = updateEvent.getPhysicalResourceId();
        UpdateResult updateResult = doUpdates(physicalResourceId);
        if (updateResult.isSuccessful()) {
-           return Response.success();
+           return Response.success(physicalResourceId);
        } else {
-           return Response.failed();
+           return Response.failed(physicalResourceId);
        }
    }

    // ...
}

Improved integration of Validation utility with other utilities

The Validation utility includes two updates that change the behavior of integration with other utilities and AWS services.

1. Updated HTTP status code when using @Validation with API Gateway

This does not require a code change in the Lambda function using the Validation utility but might impact how your calling application treats errors. Prior to v2, a 500 HTTP status code was returned when validation failed. Consistent with the HTTP specification, a 400 status code is now returned, indicating a client error instead of a server error.

Consider the following example:

import software.amazon.lambda.powertools.validation.Validation;

public class MyFunctionHandler implements RequestHandler<APIGatewayProxyRequestEvent, APIGatewayProxyResponseEvent> {

    @Override
    @Validation(inboundSchema = "classpath:/schema_in.json", outboundSchema = "classpath:/schema_out.json")
    public APIGatewayProxyResponseEvent handleRequest(APIGatewayProxyRequestEvent input, Context context) {
        // ...
        return something;
    }
}

If the request validation fails, you can expect the following change in the HTTP response status code on the client-side:

# BEFORE v2: 500 Internal Server Error
❯ curl -s -o /dev/null -w "%{http_code}" https://{API_ID}.execute-api.{REGION}.amazonaws.com/{STAGE}/{PATH}
500
# AFTER v2: 400 Bad Request
❯ curl -s -o /dev/null -w "%{http_code}" https://{API_ID}.execute-api.{REGION}.amazonaws.com/{STAGE}/{PATH}
400

2. Integration with partial batch failures when using Batch utility

This does not require a code change but might affect the batch processing flow when using the Validation utility in combination with the Batch processing utility.

Consider the following example:

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import com.amazonaws.services.lambda.runtime.events.SQSBatchResponse;
import com.amazonaws.services.lambda.runtime.events.SQSEvent;
import software.amazon.lambda.powertools.batch.BatchMessageHandlerBuilder;
import software.amazon.lambda.powertools.batch.handler.BatchMessageHandler;
import software.amazon.lambda.powertools.validation.Validation;

public class SqsBatchHandler implements RequestHandler<SQSEvent, SQSBatchResponse> {

    private final BatchMessageHandler<SQSEvent, SQSBatchResponse> handler;

    public SqsBatchHandler() {
        handler = new BatchMessageHandlerBuilder()
                .withSqsBatchHandler()
                .buildWithMessageHandler(this::processMessage, Product.class);
    }

    @Override
    @Validation(inboundSchema = "classpath:/schema_in.json", outboundSchema = "classpath:/schema_out.json")
    public SQSBatchResponse handleRequest(SQSEvent sqsEvent, Context context) {
        return handler.processBatch(sqsEvent, context);
    }

    private void processMessage(Product p, Context c) {
        // Process the product
    }
}

Check if your workload can tolerate this behavior and make sure it is designed for idempotency when using partial batch item failures. We offer the Idempotency utility to simplify integration of idempotent behavior in your workloads.

AspectJ runtime not included by default anymore

The AspectJ runtime is no longer included as a transitive dependency of Powertools. For all utilities offering annotations via AspectJ compile-time weaving, you now need to include the AspectJ runtime yourself. This is also documented, with a complete example, in our installation guide. For Maven projects, add the following dependency to your dependencies section:

+ <dependency>
+     <groupId>org.aspectj</groupId>
+     <artifactId>aspectjrt</artifactId>
+     <version>1.9.22</version>
+ </dependency>
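
For Gradle projects, the equivalent declaration (same coordinates, Gradle dependency notation) would be along these lines:

```groovy
dependencies {
    implementation 'org.aspectj:aspectjrt:1.9.22'
}
```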

Deprecated powertools-sqs module

The archived documentation contains a migration guide for both large message handling and batch processing using powertools-sqs. The sections below explain the high-level steps for your convenience.

Migrating SQS Batch processing (@SqsBatch)

The batch processing library provides a way to process messages and gracefully handle partial failures for SQS, Kinesis Streams, and DynamoDB Streams batch sources. In comparison to the legacy SQS Batch library, it relies on Lambda partial batch responses, which allows the library to provide a simpler, more reliable interface for processing batches.

In order to get started, check out the new processing messages from SQS documentation. In most cases, you will simply be able to retain your existing batch message handler function, and wrap it with the new batch processing interface. Unlike the powertools-sqs module, the new powertools-batch module uses partial batch responses to communicate to Lambda which messages have been processed and must be removed from the queue. The return value of the handler's process function must be returned to Lambda.

The new library also no longer requires the SQS:DeleteMessage action on the Lambda function's role policy, as Lambda itself now manages removal of messages from the queue.

Some tuneables from powertools-sqs are no longer provided.

Migrating SQS Large message handling (@SqsLargeMessage)
