Apache Flink versions 1.6, 1.8, and 1.11 haven't been supported by the Apache Flink community for over three years. We issued notice of this change in June 2024 and October 2024 and will now end support for these versions in Amazon Managed Service for Apache Flink.
On July 14, 2025, we'll stop your applications and place them into a READY state. You'll be able to restart your applications at that time and continue to use them as normal, subject to service limits.
From July 28, 2025, we'll disable the ability to start your applications. You won't be able to start or operate your Apache Flink version 1.6, 1.8, or 1.11 applications from this time.
We recommend that you immediately upgrade any existing applications using Apache Flink version 1.6, 1.8, or 1.11, to Apache Flink version 1.20. This is the most recent supported Flink version. You can upgrade your applications using the in-place version upgrades feature in Amazon Managed Service for Apache Flink. For more information, see Use in-place version upgrades for Apache Flink.
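If you script your operations with the AWS SDK, the same upgrade can be requested through the UpdateApplication API. The following is a minimal boto3 sketch, not a prescribed procedure: the application name is a placeholder, and it assumes your SDK version exposes the RuntimeEnvironmentUpdate parameter used by in-place version upgrades.

# Hypothetical sketch: request an in-place runtime upgrade with boto3.
# Assumes the RuntimeEnvironmentUpdate parameter is available in your boto3 version.
import boto3

client = boto3.client("kinesisanalyticsv2", region_name="us-west-2")

# Read the current version ID first; UpdateApplication requires it.
detail = client.describe_application(ApplicationName="MyApplication")["ApplicationDetail"]

client.update_application(
    ApplicationName="MyApplication",                          # placeholder application name
    CurrentApplicationVersionId=detail["ApplicationVersionId"],
    RuntimeEnvironmentUpdate="FLINK-1_20",                    # target runtime
)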
If you have further questions or concerns, you can contact AWS Support.
Note
Apache Flink version 1.13 has not been supported by the Apache Flink community for over three years. We now plan to end support for this version in Amazon Managed Service for Apache Flink on October 16, 2025. After this date, you will no longer be able to create, start, or run applications using Apache Flink version 1.13 in Amazon Managed Service for Apache Flink.
You can upgrade your applications statefully using the in-place version upgrades feature in Managed Service for Apache Flink. For more information, see Use in-place version upgrades for Apache Flink.
Version 1.15.2 is supported by Managed Service for Apache Flink, but is no longer supported by the Apache Flink community.
Using the Apache Flink Kinesis Streams connector with previous Apache Flink versions
The Apache Flink Kinesis Streams connector was not included in Apache Flink prior to version 1.11. For your application to use the Apache Flink Kinesis connector with earlier versions of Apache Flink, you must download, compile, and install the version of Apache Flink that your application uses. This connector is used to consume data from a Kinesis stream used as an application source, or to write data to a Kinesis stream used for application output.
To download and install the Apache Flink version 1.8.2 source code, do the following:
Ensure that you have Apache Maven installed and that your JAVA_HOME environment variable points to a JDK rather than a JRE. You can test your Apache Maven install with the following command:
mvn -version
Download the Apache Flink version 1.8.2 source code:
wget https://archive.apache.org/dist/flink/flink-1.8.2/flink-1.8.2-src.tgz
Uncompress the Apache Flink source code:
tar -xvf flink-1.8.2-src.tgz
Change to the Apache Flink source code directory:
cd flink-1.8.2
Compile and install Apache Flink:
mvn clean install -Pinclude-kinesis -DskipTests
Note
If you are compiling Flink on Microsoft Windows, you need to add the -Drat.skip=true parameter.
This section contains information about components that you use for building Managed Service for Apache Flink applications that work with Apache Flink 1.8.2.
Use the following component versions for Managed Service for Apache Flink applications:
Java: 1.8 (recommended)
Apache Flink: 1.8.2
Managed Service for Apache Flink for Flink Runtime (aws-kinesisanalytics-runtime): 1.0.1
Managed Service for Apache Flink Flink Connectors (aws-kinesisanalytics-flink): 1.0.1
Apache Maven: 3.1
To compile an application using Apache Flink 1.8.2, run Maven with the following parameter:
mvn package -Dflink.version=1.8.2
For an example of a pom.xml file for a Managed Service for Apache Flink application that uses Apache Flink version 1.8.2, see the Managed Service for Apache Flink 1.8.2 Getting Started Application.
For information about how to build and use application code for a Managed Service for Apache Flink application, see Create an application.
Building applications with Apache Flink 1.6.2
This section contains information about components that you use for building Managed Service for Apache Flink applications that work with Apache Flink 1.6.2.
Use the following component versions for Managed Service for Apache Flink applications:
Java: 1.8 (recommended)
AWS Java SDK: 1.11.379
Apache Flink: 1.6.2
Managed Service for Apache Flink for Flink Runtime (aws-kinesisanalytics-runtime): 1.0.1
Managed Service for Apache Flink Flink Connectors (aws-kinesisanalytics-flink): 1.0.1
Apache Maven: 3.1
Apache Beam: Not supported with Apache Flink 1.6.2.
Note
When using Managed Service for Apache Flink Runtime version 1.0.1, you specify the version of Apache Flink in your pom.xml file rather than using the -Dflink.version parameter when compiling your application code.
For an example of a pom.xml file for a Managed Service for Apache Flink application that uses Apache Flink version 1.6.2, see the Managed Service for Apache Flink 1.6.2 Getting Started Application.
For information about how to build and use application code for a Managed Service for Apache Flink application, see Create an application.
Upgrading applications
To upgrade the Apache Flink version of an Amazon Managed Service for Apache Flink application, use the in-place Apache Flink version upgrade feature using the AWS CLI, AWS SDK, AWS CloudFormation, or the AWS Management Console. For more information, see Use in-place version upgrades for Apache Flink.
You can use this feature with any existing applications you use with Amazon Managed Service for Apache Flink in READY or RUNNING state.
The Apache Flink framework contains connectors for accessing data from a variety of sources.
Getting started: Flink 1.13.2
This section introduces you to the fundamental concepts of Managed Service for Apache Flink and the DataStream API. It describes the available options for creating and testing your applications. It also provides instructions for installing the necessary tools to complete the tutorials in this guide and to create your first application.
Components of a Managed Service for Apache Flink application
To process data, your Managed Service for Apache Flink application uses a Java/Apache Maven or Scala application that processes input and produces output using the Apache Flink runtime.
A Managed Service for Apache Flink application has the following components:
Runtime properties: You can use runtime properties to configure your application without recompiling your application code.
Source: The application consumes data by using a source. A source connector reads data from a Kinesis data stream, an Amazon S3 bucket, etc. For more information, see Add streaming data sources.
Operators: The application processes data by using one or more operators. An operator can transform, enrich, or aggregate data. For more information, see Operators.
Sink: The application produces data to external sources by using sinks. A sink connector writes data to a Kinesis data stream, a Firehose stream, an Amazon S3 bucket, etc. For more information, see Write data using sinks.
After you create, compile, and package your application code, you upload the code package to an Amazon Simple Storage Service (Amazon S3) bucket. You then create a Managed Service for Apache Flink application. You pass in the code package location, a Kinesis data stream as the streaming data source, and typically a streaming or file location that receives the application's processed data.
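As a concrete illustration of the upload step described above, the following boto3 sketch copies a packaged JAR to an S3 bucket. The bucket name and key are placeholders for illustration only, not values defined by this guide.

# Minimal sketch: upload the packaged application JAR to Amazon S3.
import boto3

s3 = boto3.client("s3")
s3.upload_file(
    Filename="target/aws-kinesis-analytics-java-apps-1.0.jar",  # local build output
    Bucket="ka-app-code-username",                              # placeholder bucket name
    Key="aws-kinesis-analytics-java-apps-1.0.jar",              # object key the application will reference
)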
Prerequisites for completing the exercises
To complete the steps in this guide, you must have the following:
To get started, go to Set up an AWS account and create an administrator user.
Step 1: Set up an AWS account and create an administrator user
Sign up for an AWS account
If you do not have an AWS account, complete the following steps to create one.
To sign up for an AWS account
Follow the online instructions.
Part of the sign-up procedure involves receiving a phone call or text message and entering a verification code on the phone keypad.
When you sign up for an AWS account, an AWS account root user is created. The root user has access to all AWS services and resources in the account. As a security best practice, assign administrative access to a user, and use only the root user to perform tasks that require root user access.
AWS sends you a confirmation email after the sign-up process is complete. At any time, you can view your current account activity and manage your account by going to https://aws.amazon.com/ and choosing My Account.
Create a user with administrative access
After you sign up for an AWS account, secure your AWS account root user, enable AWS IAM Identity Center, and create an administrative user so that you don't use the root user for everyday tasks.
Sign in as the user with administrative access
To sign in with your IAM Identity Center user, use the sign-in URL that was sent to your email address when you created the IAM Identity Center user.
For help signing in using an IAM Identity Center user, see Signing in to the AWS access portal in the AWS Sign-In User Guide.
In IAM Identity Center, create a permission set that follows the best practice of applying least-privilege permissions.
For instructions, see Create a permission set in the AWS IAM Identity Center User Guide.
Assign users to a group, and then assign single sign-on access to the group.
For instructions, see Add groups in the AWS IAM Identity Center User Guide.
Users need programmatic access if they want to interact with AWS outside of the AWS Management Console. The way to grant programmatic access depends on the type of user that's accessing AWS.
To grant users programmatic access, choose one of the following options.
Workforce identity (users managed in IAM Identity Center): Use temporary credentials to sign programmatic requests to the AWS CLI, AWS SDKs, or AWS APIs. Follow the instructions for the interface that you want to use:
For the AWS CLI, see Configuring the AWS CLI to use AWS IAM Identity Center in the AWS Command Line Interface User Guide.
For AWS SDKs, tools, and AWS APIs, see IAM Identity Center authentication in the AWS SDKs and Tools Reference Guide.
IAM users (not recommended): Use long-term credentials to sign programmatic requests to the AWS CLI, AWS SDKs, or AWS APIs. Follow the instructions for the interface that you want to use:
For the AWS CLI, see Authenticating using IAM user credentials in the AWS Command Line Interface User Guide.
For AWS SDKs and tools, see Authenticate using long-term credentials in the AWS SDKs and Tools Reference Guide.
For AWS APIs, see Managing access keys for IAM users in the IAM User Guide.
Next step
Step 2: Set up the AWS Command Line Interface (AWS CLI)
Step 2: Set up the AWS Command Line Interface (AWS CLI)
In this step, you download and configure the AWS CLI to use with Managed Service for Apache Flink.
Note
The getting started exercises in this guide assume that you are using administrator credentials (adminuser) in your account to perform the operations.
If you already have the AWS CLI installed, you might need to upgrade to get the latest functionality. For more information, see Installing the AWS Command Line Interface in the AWS Command Line Interface User Guide. To check the version of the AWS CLI, run the following command:
aws --version
The exercises in this tutorial require the following AWS CLI version or later:
aws-cli/1.16.63
To set up the AWS CLI
Download and configure the AWS CLI. For instructions, see the following topics in the AWS Command Line Interface User Guide:
Add a named profile for the administrator user in the AWS CLI config file. You use this profile when executing the AWS CLI commands. For more information about named profiles, see Named Profiles in the AWS Command Line Interface User Guide.
[profile adminuser]
aws_access_key_id = adminuser access key ID
aws_secret_access_key = adminuser secret access key
region = aws-region
For a list of available AWS Regions, see Regions and Endpoints in the Amazon Web Services General Reference.
Note
The example code and commands in this tutorial use the US West (Oregon) Region. To use a different Region, change the Region in the code and commands for this tutorial to the Region you want to use.
Verify the setup by entering the following help command at the command prompt:
aws help
After you set up an AWS account and the AWS CLI, you can try the next exercise, in which you configure a sample application and test the end-to-end setup.
Next step
Step 3: Create and run a Managed Service for Apache Flink application
Step 3: Create and run a Managed Service for Apache Flink application
In this exercise, you create a Managed Service for Apache Flink application with data streams as a source and a sink.
Create two Amazon Kinesis data streams
Before you create a Managed Service for Apache Flink application for this exercise, create two Kinesis data streams (ExampleInputStream and ExampleOutputStream). Your application uses these streams for the application source and destination streams.
You can create these streams using either the Amazon Kinesis console or the following AWS CLI command. For console instructions, see Creating and Updating Data Streams in the Amazon Kinesis Data Streams Developer Guide.
To create the data streams (AWS CLI)
To create the first stream (ExampleInputStream), use the following Amazon Kinesis create-stream AWS CLI command.
$ aws kinesis create-stream \
--stream-name ExampleInputStream \
--shard-count 1 \
--region us-west-2 \
--profile adminuser
To create the second stream that the application uses to write output, run the same command, changing the stream name to ExampleOutputStream.
$ aws kinesis create-stream \
--stream-name ExampleOutputStream \
--shard-count 1 \
--region us-west-2 \
--profile adminuser
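If you prefer Python over the AWS CLI, the same two streams can be created with boto3. This is a sketch only; it assumes your default credentials and the us-west-2 Region used elsewhere in this tutorial.

# Sketch: create both tutorial streams with boto3 and wait until they are ACTIVE.
import boto3

kinesis = boto3.client("kinesis", region_name="us-west-2")
for name in ("ExampleInputStream", "ExampleOutputStream"):
    kinesis.create_stream(StreamName=name, ShardCount=1)
    # The built-in waiter polls DescribeStream until the stream becomes ACTIVE.
    kinesis.get_waiter("stream_exists").wait(StreamName=name)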
In this section, you use a Python script to write sample records to the stream for the application to process.
Create a file named stock.py with the following contents:
import datetime
import json
import random
import boto3

STREAM_NAME = "ExampleInputStream"


def get_data():
    return {
        'event_time': datetime.datetime.now().isoformat(),
        'ticker': random.choice(['AAPL', 'AMZN', 'MSFT', 'INTC', 'TBV']),
        'price': round(random.random() * 100, 2)}


def generate(stream_name, kinesis_client):
    while True:
        data = get_data()
        print(data)
        kinesis_client.put_record(
            StreamName=stream_name,
            Data=json.dumps(data),
            PartitionKey="partitionkey")


if __name__ == '__main__':
    generate(STREAM_NAME, boto3.client('kinesis', region_name='us-west-2'))
Later in the tutorial, you run the stock.py script to send data to the application.
$ python stock.py
The Java application code for this example is available from GitHub. To download the application code, do the following:
Clone the remote repository using the following command:
git clone https://github.com/aws-samples/amazon-kinesis-data-analytics-examples.git
Navigate to the amazon-kinesis-data-analytics-java-examples/GettingStarted directory.
Note the following about the application code:
A Project Object Model (pom.xml) file contains information about the application's configuration and dependencies, including the Managed Service for Apache Flink libraries.
The BasicStreamingJob.java file contains the main method that defines the application's functionality.
The application uses a Kinesis source to read from the source stream. The following snippet creates the Kinesis source:
return env.addSource(new FlinkKinesisConsumer<>(inputStreamName,
new SimpleStringSchema(), inputProperties));
Your application creates source and sink connectors to access external resources using a StreamExecutionEnvironment object.
The application creates source and sink connectors using static properties. To use dynamic application properties, use the createSourceFromApplicationProperties and createSinkFromApplicationProperties methods to create the connectors. These methods read the application's properties to configure the connectors.
For more information about runtime properties, see Use runtime properties.
In this section, you use the Apache Maven compiler to create the Java code for the application. For information about installing Apache Maven and the Java Development Kit (JDK), see Fulfill the prerequisites for completing the exercises.
To compile the application code
To use your application code, you compile and package it into a JAR file. You can compile and package your code in one of two ways:
Use the command-line Maven tool. Create your JAR file by running the following command in the directory that contains the pom.xml file:
mvn package -Dflink.version=1.13.2
Use your development environment. See your development environment documentation for details.
Note
The provided source code relies on libraries from Java 11.
You can either upload your package as a JAR file, or you can compress your package and upload it as a ZIP file. If you create your application using the AWS CLI, you specify your code content type (JAR or ZIP).
If there are errors while compiling, verify that your JAVA_HOME environment variable is correctly set.
If the application compiles successfully, the following file is created:
target/aws-kinesis-analytics-java-apps-1.0.jar
In this section, you create an Amazon Simple Storage Service (Amazon S3) bucket and upload your application code.
To upload the application code
Open the Amazon S3 console at https://console.aws.amazon.com/s3/.
Choose Create bucket.
Enter ka-app-code-<username> in the Bucket name field. Add a suffix to the bucket name, such as your user name, to make it globally unique. Choose Next.
In the Configure options step, keep the settings as they are, and choose Next.
In the Set permissions step, keep the settings as they are, and choose Next.
Choose Create bucket.
In the Amazon S3 console, choose the ka-app-code-<username> bucket, and choose Upload.
In the Select files step, choose Add files. Navigate to the aws-kinesis-analytics-java-apps-1.0.jar file that you created in the previous step. Choose Next.
You don't need to change any of the settings for the object, so choose Upload.
Your application code is now stored in an Amazon S3 bucket where your application can access it.
Create and run the Managed Service for Apache Flink application
You can create and run a Managed Service for Apache Flink application using either the console or the AWS CLI.
Note
When you create the application using the console, your AWS Identity and Access Management (IAM) and Amazon CloudWatch Logs resources are created for you. When you create the application using the AWS CLI, you create these resources separately.
Create and run the application (console)
Follow these steps to create, configure, update, and run the application using the console.
Create the application
Open the Managed Service for Apache Flink console at https://console.aws.amazon.com/flink
On the Managed Service for Apache Flink dashboard, choose Create analytics application.
On the Managed Service for Apache Flink - Create application page, provide the application details as follows:
For Application name, enter MyApplication.
For Description, enter My java test app.
For Runtime, choose Apache Flink.
Leave the version pulldown as Apache Flink version 1.13.
For Access permissions, choose Create / update IAM role kinesis-analytics-MyApplication-us-west-2.
Choose Create application.
When you create a Managed Service for Apache Flink application using the console, you have the option of having an IAM role and policy created for your application. Your application uses this role and policy to access its dependent resources. These IAM resources are named using your application name and Region as follows:
Policy: kinesis-analytics-service-MyApplication-us-west-2
Role: kinesis-analytics-MyApplication-us-west-2
Edit the IAM policy to add permissions to access the Kinesis data streams.
Open the IAM console at https://console.aws.amazon.com/iam/.
Choose Policies. Choose the kinesis-analytics-service-MyApplication-us-west-2 policy that the console created for you in the previous section.
On the Summary page, choose Edit policy. Choose the JSON tab.
Add the highlighted section of the following policy example to the policy. Replace the sample account IDs (012345678901) with your account ID.
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "ReadCode",
"Effect": "Allow",
"Action": [
"s3:GetObject",
"s3:GetObjectVersion"
],
"Resource": [
"arn:aws:s3:::ka-app-code-username
/aws-kinesis-analytics-java-apps-1.0.jar"
]
},
{
"Sid": "DescribeLogGroups",
"Effect": "Allow",
"Action": [
"logs:DescribeLogGroups"
],
"Resource": [
"arn:aws:logs:us-west-2:012345678901
:log-group:*"
]
},
{
"Sid": "DescribeLogStreams",
"Effect": "Allow",
"Action": [
"logs:DescribeLogStreams"
],
"Resource": [
"arn:aws:logs:us-west-2:012345678901
:log-group:/aws/kinesis-analytics/MyApplication:log-stream:*"
]
},
{
"Sid": "PutLogEvents",
"Effect": "Allow",
"Action": [
"logs:PutLogEvents"
],
"Resource": [
"arn:aws:logs:us-west-2:012345678901
:log-group:/aws/kinesis-analytics/MyApplication:log-stream:kinesis-analytics-log-stream"
]
},
{
"Sid": "ReadInputStream",
"Effect": "Allow",
"Action": "kinesis:*",
"Resource": "arn:aws:kinesis:us-west-2:012345678901
:stream/ExampleInputStream"
},
{
"Sid": "WriteOutputStream",
"Effect": "Allow",
"Action": "kinesis:*",
"Resource": "arn:aws:kinesis:us-west-2:012345678901
:stream/ExampleOutputStream"
}
]
}
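If you manage this policy from a script instead of the console, you can publish the edited document as a new default policy version. The boto3 sketch below is an illustration under stated assumptions: the policy ARN is a placeholder, and policy.json is a hypothetical local file containing the JSON shown above.

# Sketch: set the edited JSON as the default version of the console-created policy.
import boto3

iam = boto3.client("iam")
with open("policy.json") as f:          # hypothetical local copy of the policy document above
    policy_document = f.read()

iam.create_policy_version(
    PolicyArn="arn:aws:iam::012345678901:policy/kinesis-analytics-service-MyApplication-us-west-2",
    PolicyDocument=policy_document,
    SetAsDefault=True,                  # make the new version the active one
)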
On the MyApplication page, choose Configure.
On the Configure application page, provide the Code location:
For Amazon S3 bucket, enter ka-app-code-<username>.
For Path to Amazon S3 object, enter aws-kinesis-analytics-java-apps-1.0.jar.
Under Access to application resources, for Access permissions, choose Create / update IAM role kinesis-analytics-MyApplication-us-west-2.
Enter the following:
Group ID: ProducerConfigProperties, Key: flink.inputstream.initpos, Value: LATEST
Group ID: ProducerConfigProperties, Key: aws.region, Value: us-west-2
Group ID: ProducerConfigProperties, Key: AggregationEnabled, Value: false
Under Monitoring, ensure that the Monitoring metrics level is set to Application.
For CloudWatch logging, select the Enable check box.
Choose Update.
When you choose to enable Amazon CloudWatch logging, Managed Service for Apache Flink creates a log group and log stream for you. The names of these resources are as follows:
Log group: /aws/kinesis-analytics/MyApplication
Log stream: kinesis-analytics-log-stream
The Flink job graph can be viewed by running the application, opening the Apache Flink dashboard, and choosing the desired Flink job.
Stop the application
On the MyApplication page, choose Stop. Confirm the action.
Update the application
Using the console, you can update application settings such as application properties, monitoring settings, and the location or file name of the application JAR. You can also reload the application JAR from the Amazon S3 bucket if you need to update the application code.
On the MyApplication page, choose Configure. Update the application settings and choose Update.
Create and run the application (AWS CLI)
In this section, you use the AWS CLI to create and run the Managed Service for Apache Flink application. Managed Service for Apache Flink uses the kinesisanalyticsv2 AWS CLI command to create and interact with Managed Service for Apache Flink applications.
You must create a permissions policy and role for your application. If you do not create these IAM resources, your application cannot access its data and log streams.
First, you create a permissions policy with two statements: one that grants permissions for the read action on the source stream, and another that grants permissions for write actions on the sink stream. You then attach the policy to an IAM role (which you create in the next section). Thus, when Managed Service for Apache Flink assumes the role, the service has the necessary permissions to read from the source stream and write to the sink stream.
Use the following code to create the AKReadSourceStreamWriteSinkStream permissions policy. Replace username with the user name that you used to create the Amazon S3 bucket to store the application code. Replace the account ID in the Amazon Resource Names (ARNs) (012345678901) with your account ID.
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "S3",
"Effect": "Allow",
"Action": [
"s3:GetObject",
"s3:GetObjectVersion"
],
"Resource": ["arn:aws:s3:::ka-app-code-username
",
"arn:aws:s3:::ka-app-code-username
/*"
]
},
{
"Sid": "ReadInputStream",
"Effect": "Allow",
"Action": "kinesis:*",
"Resource": "arn:aws:kinesis:us-west-2:012345678901
:stream/ExampleInputStream"
},
{
"Sid": "WriteOutputStream",
"Effect": "Allow",
"Action": "kinesis:*",
"Resource": "arn:aws:kinesis:us-west-2:012345678901
:stream/ExampleOutputStream"
}
]
}
For step-by-step instructions to create a permissions policy, see Tutorial: Create and Attach Your First Customer Managed Policy in the IAM User Guide.
Note
To access other Amazon services, you can use the AWS SDK for Java. Managed Service for Apache Flink automatically sets the credentials required by the SDK to those of the service execution IAM role that is associated with your application. No additional steps are needed.
Create an IAM role
In this section, you create an IAM role that the Managed Service for Apache Flink application can assume to read a source stream and write to the sink stream.
Managed Service for Apache Flink cannot access your stream without permissions. You grant these permissions via an IAM role. Each IAM role has two policies attached. The trust policy grants Managed Service for Apache Flink permission to assume the role, and the permissions policy determines what Managed Service for Apache Flink can do after assuming the role.
You attach the permissions policy that you created in the preceding section to this role.
To create an IAM role
Open the IAM console at https://console.aws.amazon.com/iam/.
In the navigation pane, choose Roles, Create Role.
Under Select type of trusted identity, choose AWS Service. Under Choose the service that will use this role, choose Kinesis. Under Select your use case, choose Kinesis Analytics.
Choose Next: Permissions.
On the Attach permissions policies page, choose Next: Review. You attach permissions policies after you create the role.
On the Create role page, enter MF-stream-rw-role for the Role name. Choose Create role.
Now you have created a new IAM role called MF-stream-rw-role. Next, you update the trust and permissions policies for the role.
Attach the permissions policy to the role.
Note
For this exercise, Managed Service for Apache Flink assumes this role for both reading data from a Kinesis data stream (source) and writing output to another Kinesis data stream. So you attach the policy that you created in the previous step, Create a permissions policy.
On the Summary page, choose the Permissions tab.
Choose Attach Policies.
In the search box, enter AKReadSourceStreamWriteSinkStream (the policy that you created in the previous section).
Choose the AKReadSourceStreamWriteSinkStream policy, and choose Attach policy.
You now have created the service execution role that your application uses to access resources. Make a note of the ARN of the new role.
For step-by-step instructions for creating a role, see Creating an IAM Role (Console) in the IAM User Guide.
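The console steps above can also be scripted. The following boto3 sketch creates the permissions policy and the role and attaches one to the other; the kinesisanalytics.amazonaws.com service principal and the local file name are assumptions made for illustration, not values confirmed by this guide.

# Sketch: create the permissions policy and service execution role with boto3.
import json
import boto3

iam = boto3.client("iam")

with open("create_policy.json") as f:           # hypothetical file holding the policy JSON above
    policy = iam.create_policy(
        PolicyName="AKReadSourceStreamWriteSinkStream",
        PolicyDocument=f.read(),
    )

trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "kinesisanalytics.amazonaws.com"},  # assumed service principal
        "Action": "sts:AssumeRole",
    }],
}
role = iam.create_role(
    RoleName="MF-stream-rw-role",
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)

iam.attach_role_policy(
    RoleName="MF-stream-rw-role",
    PolicyArn=policy["Policy"]["Arn"],
)
print(role["Role"]["Arn"])  # note this ARN for create_request.json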
Create the Managed Service for Apache Flink application
Save the following JSON code to a file named create_request.json. Replace the sample role ARN with the ARN for the role that you created previously. Replace the bucket ARN suffix (username) with the suffix that you chose in the previous section. Replace the sample account ID (012345678901) in the service execution role with your account ID.
{
"ApplicationName": "test",
"ApplicationDescription": "my java test app",
"RuntimeEnvironment": "FLINK-1_15",
"ServiceExecutionRole": "arn:aws:iam::012345678901
:role/MF-stream-rw-role",
"ApplicationConfiguration": {
"ApplicationCodeConfiguration": {
"CodeContent": {
"S3ContentLocation": {
"BucketARN": "arn:aws:s3:::ka-app-code-username
",
"FileKey": "aws-kinesis-analytics-java-apps-1.0.jar"
}
},
"CodeContentType": "ZIPFILE"
},
"EnvironmentProperties": {
"PropertyGroups": [
{
"PropertyGroupId": "ProducerConfigProperties",
"PropertyMap" : {
"flink.stream.initpos" : "LATEST",
"aws.region" : "us-west-2",
"AggregationEnabled" : "false"
}
},
{
"PropertyGroupId": "ConsumerConfigProperties",
"PropertyMap" : {
"aws.region" : "us-west-2"
}
}
]
}
}
}
Execute the CreateApplication action with the preceding request to create the application:
aws kinesisanalyticsv2 create-application --cli-input-json file://create_request.json
The application is now created. You start the application in the next step.
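Before starting, you can confirm that the new application has reached the READY state. The following boto3 sketch polls DescribeApplication under the same assumptions as the CLI commands in this section.

# Sketch: poll the application status until it is READY.
import time
import boto3

client = boto3.client("kinesisanalyticsv2", region_name="us-west-2")
while True:
    status = client.describe_application(ApplicationName="test")["ApplicationDetail"]["ApplicationStatus"]
    print(status)
    if status == "READY":
        break
    time.sleep(5)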
Start the application
In this section, you use the StartApplication action to start the application.
Save the following JSON code to a file named start_request.json.
{
"ApplicationName": "test",
"RunConfiguration": {
"ApplicationRestoreConfiguration": {
"ApplicationRestoreType": "RESTORE_FROM_LATEST_SNAPSHOT"
}
}
}
Execute the StartApplication action with the preceding request to start the application:
aws kinesisanalyticsv2 start-application --cli-input-json file://start_request.json
The application is now running. You can check the Managed Service for Apache Flink metrics on the Amazon CloudWatch console to verify that the application is working.
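Besides the CloudWatch metrics, you can also confirm output directly by reading a few records from ExampleOutputStream. The boto3 sketch below reads from the single shard created earlier; shard handling is deliberately simplified for illustration.

# Sketch: read a batch of records from the output stream to verify the application is writing.
import boto3

kinesis = boto3.client("kinesis", region_name="us-west-2")
shard_id = kinesis.list_shards(StreamName="ExampleOutputStream")["Shards"][0]["ShardId"]
iterator = kinesis.get_shard_iterator(
    StreamName="ExampleOutputStream",
    ShardId=shard_id,
    ShardIteratorType="TRIM_HORIZON",   # start from the oldest available record
)["ShardIterator"]

records = kinesis.get_records(ShardIterator=iterator, Limit=10)["Records"]
for record in records:
    print(record["Data"])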
Stop the application
In this section, you use the StopApplication action to stop the application.
Save the following JSON code to a file named stop_request.json.
{
"ApplicationName": "test"
}
Execute the StopApplication action with the preceding request to stop the application:
aws kinesisanalyticsv2 stop-application --cli-input-json file://stop_request.json
The application is now stopped.
Add a CloudWatch Logging Option
You can use the AWS CLI to add an Amazon CloudWatch log stream to your application. For information about using CloudWatch Logs with your application, see Set up application logging in Managed Service for Apache Flink.
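As a sketch of what that call involves, the boto3 snippet below adds a logging option that points to an existing CloudWatch log stream. The log group and stream names and the account ID are placeholders; the log group and stream must already exist.

# Sketch: attach an existing CloudWatch log stream to the application.
import boto3

client = boto3.client("kinesisanalyticsv2", region_name="us-west-2")
version_id = client.describe_application(ApplicationName="test")["ApplicationDetail"]["ApplicationVersionId"]

client.add_application_cloud_watch_logging_option(
    ApplicationName="test",
    CurrentApplicationVersionId=version_id,
    CloudWatchLoggingOption={
        # Placeholder ARN; point this at a log stream you created beforehand.
        "LogStreamARN": "arn:aws:logs:us-west-2:012345678901:log-group:/aws/kinesis-analytics/MyApplication:log-stream:kinesis-analytics-log-stream"
    },
)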
Update Environment Properties
In this section, you use the UpdateApplication action to change the environment properties for the application without recompiling the application code. In this example, you change the Region of the source and destination streams.
Save the following JSON code to a file named update_properties_request.json.
{"ApplicationName": "test",
"CurrentApplicationVersionId": 1,
"ApplicationConfigurationUpdate": {
"EnvironmentPropertyUpdates": {
"PropertyGroups": [
{
"PropertyGroupId": "ProducerConfigProperties",
"PropertyMap" : {
"flink.stream.initpos" : "LATEST",
"aws.region" : "us-west-2",
"AggregationEnabled" : "false"
}
},
{
"PropertyGroupId": "ConsumerConfigProperties",
"PropertyMap" : {
"aws.region" : "us-west-2"
}
}
]
}
}
}
Execute the UpdateApplication action with the preceding request to update environment properties:
aws kinesisanalyticsv2 update-application --cli-input-json file://update_properties_request.json
When you need to update your application code with a new version of your code package, you use the UpdateApplication AWS CLI action.
To load a new version of the application code with the same file name, you must specify the new object version. For more information about using Amazon S3 object versions, see Enabling or Disabling Versioning.
To use the AWS CLI, delete your previous code package from your Amazon S3 bucket, upload the new version, and call UpdateApplication, specifying the same Amazon S3 bucket and object name, and the new object version. The application will restart with the new code package.
The following sample request for the UpdateApplication action reloads the application code and restarts the application. Update the CurrentApplicationVersionId to the current application version. You can check the current application version using the ListApplications or DescribeApplication actions. Update the bucket name suffix (<username>) with the suffix that you chose in the Create two Amazon Kinesis data streams section.
{
"ApplicationName": "test",
"CurrentApplicationVersionId": 1
,
"ApplicationConfigurationUpdate": {
"ApplicationCodeConfigurationUpdate": {
"CodeContentUpdate": {
"S3ContentLocationUpdate": {
"BucketARNUpdate": "arn:aws:s3:::ka-app-code-username
",
"FileKeyUpdate": "aws-kinesis-analytics-java-apps-1.0.jar",
"ObjectVersionUpdate": "SAMPLEUehYngP87ex1nzYIGYgfhypvDU
"
}
}
}
}
}
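The same code update can be scripted end to end. The following boto3 sketch uploads the new JAR, looks up the current application version, and issues the update; the bucket, key, and application name are placeholders, and the bucket must have versioning enabled so that a VersionId is returned.

# Sketch: upload a new code package and point the application at the new S3 object version.
import boto3

s3 = boto3.client("s3")
client = boto3.client("kinesisanalyticsv2", region_name="us-west-2")

with open("target/aws-kinesis-analytics-java-apps-1.0.jar", "rb") as f:
    put = s3.put_object(Bucket="ka-app-code-username", Key="aws-kinesis-analytics-java-apps-1.0.jar", Body=f)

version_id = client.describe_application(ApplicationName="test")["ApplicationDetail"]["ApplicationVersionId"]
client.update_application(
    ApplicationName="test",
    CurrentApplicationVersionId=version_id,
    ApplicationConfigurationUpdate={
        "ApplicationCodeConfigurationUpdate": {
            "CodeContentUpdate": {
                "S3ContentLocationUpdate": {
                    "BucketARNUpdate": "arn:aws:s3:::ka-app-code-username",
                    "FileKeyUpdate": "aws-kinesis-analytics-java-apps-1.0.jar",
                    "ObjectVersionUpdate": put["VersionId"],   # requires bucket versioning
                }
            }
        }
    },
)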
Next step
Step 4: Clean up AWS resources
Step 4: Clean up AWS resources
This section includes procedures for cleaning up AWS resources created in the Getting Started tutorial.
Delete your Managed Service for Apache Flink application
Open the Managed Service for Apache Flink console at https://console.aws.amazon.com/flink
In the Managed Service for Apache Flink panel, choose MyApplication.
In the application's page, choose Delete and then confirm the deletion.
Delete your Kinesis data streams
Open the Kinesis console at https://console.aws.amazon.com/kinesis.
In the Kinesis Data Streams panel, choose ExampleInputStream.
In the ExampleInputStream page, choose Delete Kinesis Stream and then confirm the deletion.
In the Kinesis streams page, choose the ExampleOutputStream, choose Actions, choose Delete, and then confirm the deletion.
Open the Amazon S3 console at https://console.aws.amazon.com/s3/.
Choose the ka-app-code-<username> bucket.
Choose Delete and then enter the bucket name to confirm deletion.
Open the IAM console at https://console.aws.amazon.com/iam/.
In the navigation bar, choose Policies.
In the filter control, enter kinesis.
Choose the kinesis-analytics-service-MyApplication-us-west-2 policy.
Choose Policy Actions and then choose Delete.
In the navigation bar, choose Roles.
Choose the kinesis-analytics-MyApplication-us-west-2 role.
Choose Delete role and then confirm the deletion.
Open the CloudWatch console at https://console.aws.amazon.com/cloudwatch/.
In the navigation bar, choose Logs.
Choose the /aws/kinesis-analytics/MyApplication log group.
Choose Delete Log Group and then confirm the deletion.
Now that you've created and run a basic Managed Service for Apache Flink application, see the following resources for more advanced Managed Service for Apache Flink solutions.
The AWS Streaming Data Solution for Amazon Kinesis: The AWS Streaming Data Solution for Amazon Kinesis automatically configures the AWS services necessary to easily capture, store, process, and deliver streaming data. The solution provides multiple options for solving streaming data use cases. The Managed Service for Apache Flink option provides an end-to-end streaming ETL example demonstrating a real-world application that runs analytical operations on simulated New York taxi data. The solution sets up all necessary AWS resources such as IAM roles and policies, a CloudWatch dashboard, and CloudWatch alarms.
AWS Streaming Data Solution for Amazon MSK: The AWS Streaming Data Solution for Amazon MSK provides AWS CloudFormation templates where data flows through producers, streaming storage, consumers, and destinations.
Clickstream Lab with Apache Flink and Apache Kafka: An end to end lab for clickstream use cases using Amazon Managed Streaming for Apache Kafka for streaming storage and Managed Service for Apache Flink for Apache Flink applications for stream processing.
Amazon Managed Service for Apache Flink Workshop: In this workshop, you build an end-to-end streaming architecture to ingest, analyze, and visualize streaming data in near real-time. You set out to improve the operations of a taxi company in New York City. You analyze the telemetry data of a taxi fleet in New York City in near real-time to optimize their fleet operations.
Learn Flink: Hands-On Training: Official introductory Apache Flink training that gets you started writing scalable streaming ETL, analytics, and event-driven applications.
Note
Be aware that Managed Service for Apache Flink does not support the Apache Flink version (1.12) used in this training. You can use Flink 1.15.2 in Managed Service for Apache Flink.
Apache Flink versions 1.6, 1.8, and 1.11 have not been supported by the Apache Flink community for over three years. We plan to deprecate these versions in Amazon Managed Service for Apache Flink on November 5, 2024. Starting from this date, you will not be able to create new applications for these Flink versions. You can continue running existing applications at this time. You can upgrade your applications statefully using the in-place version upgrades feature in Amazon Managed Service for Apache Flink. For more information, see Use in-place version upgrades for Apache Flink.
This topic contains a version of the Tutorial: Get started using the DataStream API in Managed Service for Apache Flink tutorial that uses Apache Flink 1.11.1.
This section introduces you to the fundamental concepts of Managed Service for Apache Flink and the DataStream API. It describes the available options for creating and testing your applications. It also provides instructions for installing the necessary tools to complete the tutorials in this guide and to create your first application.
Components of a Managed Service for Apache Flink application
To process data, your Managed Service for Apache Flink application uses a Java/Apache Maven or Scala application that processes input and produces output using the Apache Flink runtime.
A Managed Service for Apache Flink application has the following components:
Runtime properties: You can use runtime properties to configure your application without recompiling your application code.
Source: The application consumes data by using a source. A source connector reads data from a Kinesis data stream, an Amazon S3 bucket, etc. For more information, see Add streaming data sources.
Operators: The application processes data by using one or more operators. An operator can transform, enrich, or aggregate data. For more information, see Operators.
Sink: The application produces data to external sources by using sinks. A sink connector writes data to a Kinesis data stream, a Firehose stream, an Amazon S3 bucket, etc. For more information, see Write data using sinks.
After you create, compile, and package your application code, you upload the code package to an Amazon Simple Storage Service (Amazon S3) bucket. You then create a Managed Service for Apache Flink application. You pass in the code package location, a Kinesis data stream as the streaming data source, and typically a streaming or file location that receives the application's processed data.
Prerequisites for completing the exercises
To complete the steps in this guide, you must have the following:
To get started, go to Set up an AWS account and create an administrator user.
Step 1: Set up an AWS account and create an administrator user
Sign up for an AWS account
If you do not have an AWS account, complete the following steps to create one.
To sign up for an AWS account
Follow the online instructions.
Part of the sign-up procedure involves receiving a phone call or text message and entering a verification code on the phone keypad.
When you sign up for an AWS account, an AWS account root user is created. The root user has access to all AWS services and resources in the account. As a security best practice, assign administrative access to a user, and use only the root user to perform tasks that require root user access.
AWS sends you a confirmation email after the sign-up process is complete. At any time, you can view your current account activity and manage your account by going to https://aws.amazon.com/ and choosing My Account.
Create a user with administrative access
After you sign up for an AWS account, secure your AWS account root user, enable AWS IAM Identity Center, and create an administrative user so that you don't use the root user for everyday tasks.
Sign in as the user with administrative access
To sign in with your IAM Identity Center user, use the sign-in URL that was sent to your email address when you created the IAM Identity Center user.
For help signing in using an IAM Identity Center user, see Signing in to the AWS access portal in the AWS Sign-In User Guide.
In IAM Identity Center, create a permission set that follows the best practice of applying least-privilege permissions.
For instructions, see Create a permission set in the AWS IAM Identity Center User Guide.
Assign users to a group, and then assign single sign-on access to the group.
For instructions, see Add groups in the AWS IAM Identity Center User Guide.
Users need programmatic access if they want to interact with AWS outside of the AWS Management Console. The way to grant programmatic access depends on the type of user that's accessing AWS.
To grant users programmatic access, choose one of the following options.
Workforce identity (users managed in IAM Identity Center): Use temporary credentials to sign programmatic requests to the AWS CLI, AWS SDKs, or AWS APIs. Follow the instructions for the interface that you want to use:
For the AWS CLI, see Configuring the AWS CLI to use AWS IAM Identity Center in the AWS Command Line Interface User Guide.
For AWS SDKs, tools, and AWS APIs, see IAM Identity Center authentication in the AWS SDKs and Tools Reference Guide.
IAM users (not recommended): Use long-term credentials to sign programmatic requests to the AWS CLI, AWS SDKs, or AWS APIs. Follow the instructions for the interface that you want to use:
For the AWS CLI, see Authenticating using IAM user credentials in the AWS Command Line Interface User Guide.
For AWS SDKs and tools, see Authenticate using long-term credentials in the AWS SDKs and Tools Reference Guide.
For AWS APIs, see Managing access keys for IAM users in the IAM User Guide.
Step 2: Set up the AWS Command Line Interface (AWS CLI)
In this step, you download and configure the AWS CLI to use with Managed Service for Apache Flink.
Note
The getting started exercises in this guide assume that you are using administrator credentials (adminuser) in your account to perform the operations.
If you already have the AWS CLI installed, you might need to upgrade to get the latest functionality. For more information, see Installing the AWS Command Line Interface in the AWS Command Line Interface User Guide. To check the version of the AWS CLI, run the following command:
aws --version
The exercises in this tutorial require the following AWS CLI version or later:
aws-cli/1.16.63
To set up the AWS CLI
Download and configure the AWS CLI. For instructions, see the following topics in the AWS Command Line Interface User Guide:
Add a named profile for the administrator user in the AWS CLI config file. You use this profile when executing the AWS CLI commands. For more information about named profiles, see Named Profiles in the AWS Command Line Interface User Guide.
[profile adminuser]
aws_access_key_id = adminuser access key ID
aws_secret_access_key = adminuser secret access key
region = aws-region
For a list of available AWS Regions, see Regions and Endpoints in the Amazon Web Services General Reference.
Note
The example code and commands in this tutorial use the US West (Oregon) Region. To use a different Region, change the Region in the code and commands for this tutorial to the Region you want to use.
Verify the setup by entering the following help command at the command prompt:
aws help
After you set up an AWS account and the AWS CLI, you can try the next exercise, in which you configure a sample application and test the end-to-end setup.
Next step
Step 3: Create and run a Managed Service for Apache Flink application
Step 3: Create and run a Managed Service for Apache Flink application
In this exercise, you create a Managed Service for Apache Flink application with data streams as a source and a sink.
Create two Amazon Kinesis data streams
Before you create a Managed Service for Apache Flink application for this exercise, create two Kinesis data streams (ExampleInputStream and ExampleOutputStream). Your application uses these streams for the application source and destination streams.
You can create these streams using either the Amazon Kinesis console or the following AWS CLI command. For console instructions, see Creating and Updating Data Streams in the Amazon Kinesis Data Streams Developer Guide.
To create the data streams (AWS CLI)
To create the first stream (ExampleInputStream), use the following Amazon Kinesis create-stream AWS CLI command.
$ aws kinesis create-stream \
--stream-name ExampleInputStream \
--shard-count 1 \
--region us-west-2 \
--profile adminuser
To create the second stream that the application uses to write output, run the same command, changing the stream name to ExampleOutputStream.
$ aws kinesis create-stream \
--stream-name ExampleOutputStream \
--shard-count 1 \
--region us-west-2 \
--profile adminuser
In this section, you use a Python script to write sample records to the stream for the application to process.
Create a file named stock.py with the following contents:
import datetime
import json
import random
import boto3

STREAM_NAME = "ExampleInputStream"


def get_data():
    return {
        "EVENT_TIME": datetime.datetime.now().isoformat(),
        "TICKER": random.choice(["AAPL", "AMZN", "MSFT", "INTC", "TBV"]),
        "PRICE": round(random.random() * 100, 2),
    }


def generate(stream_name, kinesis_client):
    while True:
        data = get_data()
        print(data)
        kinesis_client.put_record(
            StreamName=stream_name, Data=json.dumps(data), PartitionKey="partitionkey"
        )


if __name__ == "__main__":
    generate(STREAM_NAME, boto3.client("kinesis"))
Later in the tutorial, you run the stock.py script to send data to the application.
$ python stock.py
The Java application code for this example is available from GitHub. To download the application code, do the following:
Clone the remote repository using the following command:
git clone https://github.com/aws-samples/amazon-kinesis-data-analytics-examples.git
Navigate to the amazon-kinesis-data-analytics-java-examples/GettingStarted directory.
Note the following about the application code:
A Project Object Model (pom.xml) file contains information about the application's configuration and dependencies, including the Managed Service for Apache Flink libraries.
The BasicStreamingJob.java file contains the main method that defines the application's functionality.
The application uses a Kinesis source to read from the source stream. The following snippet creates the Kinesis source:
return env.addSource(new FlinkKinesisConsumer<>(inputStreamName,
new SimpleStringSchema(), inputProperties));
Your application creates source and sink connectors to access external resources using a StreamExecutionEnvironment object.
The application creates source and sink connectors using static properties. To use dynamic application properties, use the createSourceFromApplicationProperties and createSinkFromApplicationProperties methods to create the connectors. These methods read the application's properties to configure the connectors.
For more information about runtime properties, see Use runtime properties.
In this section, you use the Apache Maven compiler to create the Java code for the application. For information about installing Apache Maven and the Java Development Kit (JDK), see Fulfill the prerequisites for completing the exercises.
To compile the application code
To use your application code, you compile and package it into a JAR file. You can compile and package your code in one of two ways:
Use the command-line Maven tool. Create your JAR file by running the following command in the directory that contains the pom.xml file:
mvn package -Dflink.version=1.11.3
Use your development environment. See your development environment documentation for details.
Note
The provided source code relies on libraries from Java 11. Ensure that your project's Java version is 11.
You can either upload your package as a JAR file, or you can compress your package and upload it as a ZIP file. If you create your application using the AWS CLI, you specify your code content type (JAR or ZIP).
If there are errors while compiling, verify that your JAVA_HOME environment variable is correctly set.
If the application compiles successfully, the following file is created:
target/aws-kinesis-analytics-java-apps-1.0.jar
In this section, you create an Amazon Simple Storage Service (Amazon S3) bucket and upload your application code.
To upload the application code
Open the Amazon S3 console at https://console.aws.amazon.com/s3/.
Choose Create bucket.
Enter ka-app-code-<username> in the Bucket name field. Add a suffix to the bucket name, such as your user name, to make it globally unique. Choose Next.
In the Configure options step, keep the settings as they are, and choose Next.
In the Set permissions step, keep the settings as they are, and choose Next.
Choose Create bucket.
In the Amazon S3 console, choose the ka-app-code-<username> bucket, and choose Upload.
In the Select files step, choose Add files. Navigate to the aws-kinesis-analytics-java-apps-1.0.jar file that you created in the previous step. Choose Next.
You don't need to change any of the settings for the object, so choose Upload.
Your application code is now stored in an Amazon S3 bucket where your application can access it.
Create and run the Managed Service for Apache Flink application
You can create and run a Managed Service for Apache Flink application using either the console or the AWS CLI.
Note
When you create the application using the console, your AWS Identity and Access Management (IAM) and Amazon CloudWatch Logs resources are created for you. When you create the application using the AWS CLI, you create these resources separately.
Create and run the application (console)
Follow these steps to create, configure, update, and run the application using the console.
Create the application
Open the Managed Service for Apache Flink console at https://console.aws.amazon.com/flink
On the Managed Service for Apache Flink dashboard, choose Create analytics application.
On the Managed Service for Apache Flink - Create application page, provide the application details as follows:
For Application name, enter MyApplication.
For Description, enter My java test app.
For Runtime, choose Apache Flink.
Leave the version pulldown as Apache Flink version 1.11 (Recommended version).
For Access permissions, choose Create / update IAM role kinesis-analytics-MyApplication-us-west-2.
Choose Create application.
When you create a Managed Service for Apache Flink application using the console, you have the option of having an IAM role and policy created for your application. Your application uses this role and policy to access its dependent resources. These IAM resources are named using your application name and Region as follows:
Policy: kinesis-analytics-service-MyApplication-us-west-2
Role: kinesis-analytics-MyApplication-us-west-2
Edit the IAM policy to add permissions to access the Kinesis data streams.
Open the IAM console at https://console.aws.amazon.com/iam/.
Choose Policies. Choose the kinesis-analytics-service-MyApplication-us-west-2 policy that the console created for you in the previous section.
On the Summary page, choose Edit policy. Choose the JSON tab.
Add the highlighted section of the following policy example to the policy. Replace the sample account IDs (012345678901) with your account ID.
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "ReadCode",
"Effect": "Allow",
"Action": [
"s3:GetObject",
"s3:GetObjectVersion"
],
"Resource": [
"arn:aws:s3:::ka-app-code-username
/aws-kinesis-analytics-java-apps-1.0.jar"
]
},
{
"Sid": "DescribeLogGroups",
"Effect": "Allow",
"Action": [
"logs:DescribeLogGroups"
],
"Resource": [
"arn:aws:logs:us-west-2:012345678901
:log-group:*"
]
},
{
"Sid": "DescribeLogStreams",
"Effect": "Allow",
"Action": [
"logs:DescribeLogStreams"
],
"Resource": [
"arn:aws:logs:us-west-2:012345678901
:log-group:/aws/kinesis-analytics/MyApplication:log-stream:*"
]
},
{
"Sid": "PutLogEvents",
"Effect": "Allow",
"Action": [
"logs:PutLogEvents"
],
"Resource": [
"arn:aws:logs:us-west-2:012345678901
:log-group:/aws/kinesis-analytics/MyApplication:log-stream:kinesis-analytics-log-stream"
]
},
{
"Sid": "ReadInputStream",
"Effect": "Allow",
"Action": "kinesis:*",
"Resource": "arn:aws:kinesis:us-west-2:012345678901
:stream/ExampleInputStream"
},
{
"Sid": "WriteOutputStream",
"Effect": "Allow",
"Action": "kinesis:*",
"Resource": "arn:aws:kinesis:us-west-2:012345678901
:stream/ExampleOutputStream"
}
]
}
On the MyApplication page, choose Configure.
On the Configure application page, provide the Code location:
For Amazon S3 bucket, enter ka-app-code-<username>.
For Path to Amazon S3 object, enter aws-kinesis-analytics-java-apps-1.0.jar.
Under Access to application resources, for Access permissions, choose Create / update IAM role kinesis-analytics-MyApplication-us-west-2.
Under Properties, for Group ID, enter ProducerConfigProperties.
Enter the following application properties and values:
Group ID: ProducerConfigProperties, Key: flink.inputstream.initpos, Value: LATEST
Group ID: ProducerConfigProperties, Key: aws.region, Value: us-west-2
Group ID: ProducerConfigProperties, Key: AggregationEnabled, Value: false
Under Monitoring, ensure that the Monitoring metrics level is set to Application.
For CloudWatch logging, select the Enable check box.
Choose Update.
When you choose to enable Amazon CloudWatch logging, Managed Service for Apache Flink creates a log group and log stream for you. The names of these resources are as follows:
Log group: /aws/kinesis-analytics/MyApplication
Log stream: kinesis-analytics-log-stream
The Flink job graph can be viewed by running the application, opening the Apache Flink dashboard, and choosing the desired Flink job.
Stop the application
On the MyApplication page, choose Stop. Confirm the action.
Update the application
Using the console, you can update application settings such as application properties, monitoring settings, and the location or file name of the application JAR. You can also reload the application JAR from the Amazon S3 bucket if you need to update the application code.
On the MyApplication page, choose Configure. Update the application settings and choose Update.
Create and run the application (AWS CLI)
In this section, you use the AWS CLI to create and run the Managed Service for Apache Flink application. Managed Service for Apache Flink uses the kinesisanalyticsv2 AWS CLI command to create and interact with Managed Service for Apache Flink applications.
You must create a permissions policy and role for your application. If you do not create these IAM resources, your application cannot access its data and log streams.
First, you create a permissions policy with two statements: one that grants permissions for the read action on the source stream, and another that grants permissions for write actions on the sink stream. You then attach the policy to an IAM role (which you create in the next section). Thus, when Managed Service for Apache Flink assumes the role, the service has the necessary permissions to read from the source stream and write to the sink stream.
Use the following code to create the AKReadSourceStreamWriteSinkStream permissions policy. Replace username with the user name that you used to create the Amazon S3 bucket to store the application code. Replace the account ID in the Amazon Resource Names (ARNs) (012345678901) with your account ID.
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "S3",
"Effect": "Allow",
"Action": [
"s3:GetObject",
"s3:GetObjectVersion"
],
"Resource": ["arn:aws:s3:::ka-app-code-username
",
"arn:aws:s3:::ka-app-code-username
/*"
]
},
{
"Sid": "ReadInputStream",
"Effect": "Allow",
"Action": "kinesis:*",
"Resource": "arn:aws:kinesis:us-west-2:012345678901
:stream/ExampleInputStream"
},
{
"Sid": "WriteOutputStream",
"Effect": "Allow",
"Action": "kinesis:*",
"Resource": "arn:aws:kinesis:us-west-2:012345678901
:stream/ExampleOutputStream"
}
]
}
For step-by-step instructions to create a permissions policy, see Tutorial: Create and Attach Your First Customer Managed Policy in the IAM User Guide.
Note
To access other Amazon services, you can use the AWS SDK for Java. Managed Service for Apache Flink automatically sets the credentials required by the SDK to those of the service execution IAM role that is associated with your application. No additional steps are needed.
Create an IAM role
In this section, you create an IAM role that the Managed Service for Apache Flink application can assume to read a source stream and write to the sink stream.
Managed Service for Apache Flink cannot access your stream without permissions. You grant these permissions via an IAM role. Each IAM role has two policies attached. The trust policy grants Managed Service for Apache Flink permission to assume the role, and the permissions policy determines what Managed Service for Apache Flink can do after assuming the role.
You attach the permissions policy that you created in the preceding section to this role.
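For reference, the trust policy on this role typically looks like the following example. This is an illustrative sketch only; when you choose the Kinesis Analytics use case in the console, an equivalent trust policy is created for you.
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Service": "kinesisanalytics.amazonaws.com"
            },
            "Action": "sts:AssumeRole"
        }
    ]
}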
To create an IAM roleOpen the IAM console at https://console.aws.amazon.com/iam/.
In the navigation pane, choose Roles, Create Role.
Under Select type of trusted identity, choose AWS Service. Under Choose the service that will use this role, choose Kinesis. Under Select your use case, choose Kinesis Analytics.
Choose Next: Permissions.
On the Attach permissions policies page, choose Next: Review. You attach permissions policies after you create the role.
On the Create role page, enter MF-stream-rw-role
for the Role name. Choose Create role.
Now you have created a new IAM role called MF-stream-rw-role
. Next, you update the trust and permissions policies for the role.
Attach the permissions policy to the role.
NoteFor this exercise, Managed Service for Apache Flink assumes this role for both reading data from a Kinesis data stream (source) and writing output to another Kinesis data stream. So you attach the policy that you created in the previous step, Create a Permissions Policy.
On the Summary page, choose the Permissions tab.
Choose Attach Policies.
In the search box, enter AKReadSourceStreamWriteSinkStream
(the policy that you created in the previous section).
Choose the AKReadSourceStreamWriteSinkStream policy, and choose Attach policy.
You now have created the service execution role that your application uses to access resources. Make a note of the ARN of the new role.
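If you need to look up the role ARN later, you can retrieve it with the AWS CLI. This is an optional check; the role name is the one you entered when you created the role:
aws iam get-role --role-name MF-stream-rw-role --query 'Role.Arn' --output text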
For step-by-step instructions for creating a role, see Creating an IAM Role (Console) in the IAM User Guide.
Create the Managed Service for Apache Flink applicationSave the following JSON code to a file named create_request.json. Replace the sample role ARN with the ARN for the role that you created previously. Replace the bucket ARN suffix (username) with the suffix that you chose in the previous section. Replace the sample account ID (012345678901) in the service execution role with your account ID.
{
"ApplicationName": "test",
"ApplicationDescription": "my java test app",
"RuntimeEnvironment": "FLINK-1_11",
"ServiceExecutionRole": "arn:aws:iam::012345678901
:role/MF-stream-rw-role",
"ApplicationConfiguration": {
"ApplicationCodeConfiguration": {
"CodeContent": {
"S3ContentLocation": {
"BucketARN": "arn:aws:s3:::ka-app-code-username
",
"FileKey": "aws-kinesis-analytics-java-apps-1.0.jar"
}
},
"CodeContentType": "ZIPFILE"
},
"EnvironmentProperties": {
"PropertyGroups": [
{
"PropertyGroupId": "ProducerConfigProperties",
"PropertyMap" : {
"flink.stream.initpos" : "LATEST",
"aws.region" : "us-west-2",
"AggregationEnabled" : "false"
}
},
{
"PropertyGroupId": "ConsumerConfigProperties",
"PropertyMap" : {
"aws.region" : "us-west-2"
}
}
]
}
}
}
Execute the CreateApplication
action with the preceding request to create the application:
aws kinesisanalyticsv2 create-application --cli-input-json file://create_request.json
The application is now created. You start the application in the next step.
Start the applicationIn this section, you use the StartApplication
action to start the application.
Save the following JSON code to a file named start_request.json
.
{
"ApplicationName": "test",
"RunConfiguration": {
"ApplicationRestoreConfiguration": {
"ApplicationRestoreType": "RESTORE_FROM_LATEST_SNAPSHOT"
}
}
}
Execute the StartApplication
action with the preceding request to start the application:
aws kinesisanalyticsv2 start-application --cli-input-json file://start_request.json
The application is now running. You can check the Managed Service for Apache Flink metrics on the Amazon CloudWatch console to verify that the application is working.
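You can also confirm the application status from the AWS CLI. The following example assumes the application name test from create_request.json; the status changes from STARTING to RUNNING when the application is up:
aws kinesisanalyticsv2 describe-application --application-name test --query 'ApplicationDetail.ApplicationStatus'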
Stop the applicationIn this section, you use the StopApplication
action to stop the application.
Save the following JSON code to a file named stop_request.json
.
{
"ApplicationName": "test"
}
Execute the StopApplication
action with the following request to stop the application:
aws kinesisanalyticsv2 stop-application --cli-input-json file://stop_request.json
The application is now stopped.
Add a CloudWatch logging optionYou can use the AWS CLI to add an Amazon CloudWatch log stream to your application. For information about using CloudWatch Logs with your application, see Set up application logging in Managed Service for Apache Flink.
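For example, the following command sketches how to add an existing CloudWatch log stream to the application. The log group and log stream in the ARN are placeholders that you must create first, and the version ID must match your application's current version:
aws kinesisanalyticsv2 add-application-cloud-watch-logging-option \
    --application-name test \
    --current-application-version-id 1 \
    --cloud-watch-logging-option LogStreamARN=arn:aws:logs:us-west-2:012345678901:log-group:MyApplicationLogGroup:log-stream:MyApplicationLogStream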
Update environment propertiesIn this section, you use the UpdateApplication
action to change the environment properties for the application without recompiling the application code. In this example, you change the Region of the source and destination streams.
Save the following JSON code to a file named update_properties_request.json
.
{"ApplicationName": "test",
"CurrentApplicationVersionId": 1,
"ApplicationConfigurationUpdate": {
"EnvironmentPropertyUpdates": {
"PropertyGroups": [
{
"PropertyGroupId": "ProducerConfigProperties",
"PropertyMap" : {
"flink.stream.initpos" : "LATEST",
"aws.region" : "us-west-2",
"AggregationEnabled" : "false"
}
},
{
"PropertyGroupId": "ConsumerConfigProperties",
"PropertyMap" : {
"aws.region" : "us-west-2"
}
}
]
}
}
}
Execute the UpdateApplication
action with the preceding request to update environment properties:
aws kinesisanalyticsv2 update-application --cli-input-json file://update_properties_request.json
When you need to update your application code with a new version of your code package, you use the UpdateApplication
AWS CLI action.
To load a new version of the application code with the same file name, you must specify the new object version. For more information about using Amazon S3 object versions, see Enabling or Disabling Versioning.
To use the AWS CLI, delete your previous code package from your Amazon S3 bucket, upload the new version, and call UpdateApplication
, specifying the same Amazon S3 bucket and object name, and the new object version. The application will restart with the new code package.
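The following commands sketch one way to stage a new build of the code package and look up the object version ID to use in the request. The bucket name is a placeholder; versioning must be enabled on the bucket for object version IDs to be assigned:
aws s3api put-bucket-versioning --bucket ka-app-code-<username> --versioning-configuration Status=Enabled
aws s3 cp target/aws-kinesis-analytics-java-apps-1.0.jar s3://ka-app-code-<username>/
aws s3api list-object-versions --bucket ka-app-code-<username> --prefix aws-kinesis-analytics-java-apps-1.0.jar --query 'Versions[?IsLatest].VersionId'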
The following sample request for the UpdateApplication
action reloads the application code and restarts the application. Update the CurrentApplicationVersionId
to the current application version. You can check the current application version using the ListApplications
or DescribeApplication
actions. Update the bucket name suffix (<username>
) with the suffix that you chose in the Create two Amazon Kinesis data streams section.
{
"ApplicationName": "test",
"CurrentApplicationVersionId": 1
,
"ApplicationConfigurationUpdate": {
"ApplicationCodeConfigurationUpdate": {
"CodeContentUpdate": {
"S3ContentLocationUpdate": {
"BucketARNUpdate": "arn:aws:s3:::ka-app-code-username
",
"FileKeyUpdate": "aws-kinesis-analytics-java-apps-1.0.jar",
"ObjectVersionUpdate": "SAMPLEUehYngP87ex1nzYIGYgfhypvDU
"
}
}
}
}
}
Next step
Step 4: Clean up AWS resources
Step 4: Clean up AWS resourcesThis section includes procedures for cleaning up AWS resources created in the Getting Started tutorial.
Delete your Managed Service for Apache Flink applicationOpen the Kinesis console at https://console.aws.amazon.com/kinesis.
In the Managed Service for Apache Flink panel, choose MyApplication.
In the application's page, choose Delete and then confirm the deletion.
Open the Managed Service for Apache Flink console at https://console.aws.amazon.com/flink
In the Kinesis Data Streams panel, choose ExampleInputStream.
In the ExampleInputStream page, choose Delete Kinesis Stream and then confirm the deletion.
In the Kinesis streams page, choose the ExampleOutputStream, choose Actions, choose Delete, and then confirm the deletion.
Open the Amazon S3 console at https://console.aws.amazon.com/s3/.
Choose the ka-app-code-<username>
bucket.
Choose Delete and then enter the bucket name to confirm deletion.
Open the IAM console at https://console.aws.amazon.com/iam/.
In the navigation bar, choose Policies.
In the filter control, enter kinesis.
Choose the kinesis-analytics-service-MyApplication-us-west-2 policy.
Choose Policy Actions and then choose Delete.
In the navigation bar, choose Roles.
Choose the kinesis-analytics-MyApplication-us-west-2 role.
Choose Delete role and then confirm the deletion.
Open the CloudWatch console at https://console.aws.amazon.com/cloudwatch/.
In the navigation bar, choose Logs.
Choose the /aws/kinesis-analytics/MyApplication log group.
Choose Delete Log Group and then confirm the deletion.
Now that you've created and run a basic Managed Service for Apache Flink application, see the following resources for more advanced Managed Service for Apache Flink solutions.
The AWS Streaming Data Solution for Amazon Kinesis: The AWS Streaming Data Solution for Amazon Kinesis automatically configures the AWS services necessary to easily capture, store, process, and deliver streaming data. The solution provides multiple options for solving streaming data use cases. The Managed Service for Apache Flink option provides an end-to-end streaming ETL example demonstrating a real-world application that runs analytical operations on simulated New York taxi data. The solution sets up all necessary AWS resources such as IAM roles and policies, a CloudWatch dashboard, and CloudWatch alarms.
AWS Streaming Data Solution for Amazon MSK: The AWS Streaming Data Solution for Amazon MSK provides AWS CloudFormation templates where data flows through producers, streaming storage, consumers, and destinations.
Clickstream Lab with Apache Flink and Apache Kafka: An end-to-end lab for clickstream use cases that uses Amazon Managed Streaming for Apache Kafka for streaming storage and Managed Service for Apache Flink for stream processing.
Amazon Managed Service for Apache Flink Workshop: In this workshop, you build an end-to-end streaming architecture to ingest, analyze, and visualize streaming data in near real-time. You set out to improve the operations of a taxi company in New York City. You analyze the telemetry data of a taxi fleet in New York City in near real-time to optimize their fleet operations.
Learn Flink: Hands On Training: Official introductory Apache Flink training that gets you started writing scalable streaming ETL, analytics, and event-driven applications.
NoteBe aware that Managed Service for Apache Flink does not support the Apache Flink version (1.12) used in this training. You can use Flink 1.15.2 in Managed Service for Apache Flink.
Apache Flink Code Examples: A GitHub repository of a wide variety of Apache Flink application examples.
Apache Flink versions 1.6, 1.8, and 1.11 have not been supported by the Apache Flink community for over three years. We plan to deprecate these versions in Amazon Managed Service for Apache Flink on November 5, 2024. Starting from this date, you will not be able to create new applications for these Flink versions. You can continue running existing applications at this time. You can upgrade your applications statefully using the in-place version upgrades feature in Amazon Managed Service for Apache Flink. For more information, see Use in-place version upgrades for Apache Flink.
This topic contains a version of the Tutorial: Get started using the DataStream API in Managed Service for Apache Flink tutorial that uses Apache Flink 1.8.2.
Components of Managed Service for Apache Flink applicationTo process data, your Managed Service for Apache Flink application uses a Java/Apache Maven or Scala application that processes input and produces output using the Apache Flink runtime.
A Managed Service for Apache Flink application has the following components:
Runtime properties: You can use runtime properties to configure your application without recompiling your application code.
Source: The application consumes data by using a source. A source connector reads data from a Kinesis data stream, an Amazon S3 bucket, etc. For more information, see Add streaming data sources.
Operators: The application processes data by using one or more operators. An operator can transform, enrich, or aggregate data. For more information, see Operators.
Sink: The application produces data to external sources by using sinks. A sink connector writes data to a Kinesis data stream, a Firehose stream, an Amazon S3 bucket, etc. For more information, see Write data using sinks.
After you create, compile, and package your application code, you upload the code package to an Amazon Simple Storage Service (Amazon S3) bucket. You then create a Managed Service for Apache Flink application. You pass in the code package location, a Kinesis data stream as the streaming data source, and typically a streaming or file location that receives the application's processed data.
Prerequisites for completing the exercisesTo complete the steps in this guide, you must have the following:
To get started, go to Step 1: Set up an AWS account and create an administrator user.
Step 1: Set up an AWS account and create an administrator user Sign up for an AWS accountIf you do not have an AWS account, complete the following steps to create one.
To sign up for an AWS accountFollow the online instructions.
Part of the sign-up procedure involves receiving a phone call or text message and entering a verification code on the phone keypad.
When you sign up for an AWS account, an AWS account root user is created. The root user has access to all AWS services and resources in the account. As a security best practice, assign administrative access to a user, and use only the root user to perform tasks that require root user access.
AWS sends you a confirmation email after the sign-up process is complete. At any time, you can view your current account activity and manage your account by going to https://aws.amazon.com/ and choosing My Account.
Create a user with administrative accessAfter you sign up for an AWS account, secure your AWS account root user, enable AWS IAM Identity Center, and create an administrative user so that you don't use the root user for everyday tasks.
Sign in as the user with administrative accessTo sign in with your IAM Identity Center user, use the sign-in URL that was sent to your email address when you created the IAM Identity Center user.
For help signing in using an IAM Identity Center user, see Signing in to the AWS access portal in the AWS Sign-In User Guide.
In IAM Identity Center, create a permission set that follows the best practice of applying least-privilege permissions.
For instructions, see Create a permission set in the AWS IAM Identity Center User Guide.
Assign users to a group, and then assign single sign-on access to the group.
For instructions, see Add groups in the AWS IAM Identity Center User Guide.
Users need programmatic access if they want to interact with AWS outside of the AWS Management Console. The way to grant programmatic access depends on the type of user that's accessing AWS.
To grant users programmatic access, choose one of the following options.
Workforce identity (Users managed in IAM Identity Center): Use temporary credentials to sign programmatic requests to the AWS CLI, AWS SDKs, or AWS APIs. Follow the instructions for the interface that you want to use.
For the AWS CLI, see Configuring the AWS CLI to use AWS IAM Identity Center in the AWS Command Line Interface User Guide.
For AWS SDKs, tools, and AWS APIs, see IAM Identity Center authentication in the AWS SDKs and Tools Reference Guide.
IAM (Not recommended): Use long-term credentials to sign programmatic requests to the AWS CLI, AWS SDKs, or AWS APIs. Follow the instructions for the interface that you want to use.
For the AWS CLI, see Authenticating using IAM user credentials in the AWS Command Line Interface User Guide.
For AWS SDKs and tools, see Authenticate using long-term credentials in the AWS SDKs and Tools Reference Guide.
For AWS APIs, see Managing access keys for IAM users in the IAM User Guide.
In this step, you download and configure the AWS CLI to use with Managed Service for Apache Flink.
NoteThe getting started exercises in this guide assume that you are using administrator credentials (adminuser
) in your account to perform the operations.
If you already have the AWS CLI installed, you might need to upgrade to get the latest functionality. For more information, see Installing the AWS Command Line Interface in the AWS Command Line Interface User Guide. To check the version of the AWS CLI, run the following command:
aws --version
The exercises in this tutorial require the following AWS CLI version or later:
aws-cli/1.16.63
To set up the AWS CLI
Download and configure the AWS CLI. For instructions, see the following topics in the AWS Command Line Interface User Guide:
Add a named profile for the administrator user in the AWS CLI config
file. You use this profile when executing the AWS CLI commands. For more information about named profiles, see Named Profiles in the AWS Command Line Interface User Guide.
[profile adminuser]
aws_access_key_id = adminuser access key ID
aws_secret_access_key = adminuser secret access key
region = aws-region
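If you prefer, you can create the profile interactively instead of editing the config file by hand. The aws configure command prompts you for the access key ID, secret access key, Region, and output format:
aws configure --profile adminuser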
For a list of available Regions, see Regions and Endpoints in the Amazon Web Services General Reference.
NoteThe example code and commands in this tutorial use the US West (Oregon) Region. To use a different AWS Region, change the Region in the code and commands for this tutorial to the Region you want to use.
Verify the setup by entering the following help command at the command prompt:
aws help
After you set up an AWS account and the AWS CLI, you can try the next exercise, in which you configure a sample application and test the end-to-end setup.
Next stepStep 3: Create and run a Managed Service for Apache Flink application
Step 3: Create and run a Managed Service for Apache Flink applicationIn this exercise, you create a Managed Service for Apache Flink application with data streams as a source and a sink.
Create two Amazon Kinesis data streamsBefore you create a Managed Service for Apache Flink application for this exercise, create two Kinesis data streams (ExampleInputStream
and ExampleOutputStream
). Your application uses these streams for the application source and destination streams.
You can create these streams using either the Amazon Kinesis console or the following AWS CLI command. For console instructions, see Creating and Updating Data Streams in the Amazon Kinesis Data Streams Developer Guide.
To create the data streams (AWS CLI)To create the first stream (ExampleInputStream
), use the following Amazon Kinesis create-stream
AWS CLI command.
$ aws kinesis create-stream \
--stream-name ExampleInputStream \
--shard-count 1 \
--region us-west-2 \
--profile adminuser
To create the second stream that the application uses to write output, run the same command, changing the stream name to ExampleOutputStream
.
$ aws kinesis create-stream \
--stream-name ExampleOutputStream \
--shard-count 1 \
--region us-west-2 \
--profile adminuser
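Stream creation takes a few seconds. If you want to confirm that both streams are ready before continuing, you can check their status; the streams are usable when the status is ACTIVE:
aws kinesis describe-stream-summary --stream-name ExampleInputStream --region us-west-2 --profile adminuser --query 'StreamDescriptionSummary.StreamStatus'
aws kinesis describe-stream-summary --stream-name ExampleOutputStream --region us-west-2 --profile adminuser --query 'StreamDescriptionSummary.StreamStatus'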
In this section, you use a Python script to write sample records to the stream for the application to process.
Create a file named stock.py
with the following contents:
import datetime
import json
import random
import boto3
STREAM_NAME = "ExampleInputStream"
def get_data():
return {
"EVENT_TIME": datetime.datetime.now().isoformat(),
"TICKER": random.choice(["AAPL", "AMZN", "MSFT", "INTC", "TBV"]),
"PRICE": round(random.random() * 100, 2),
}
def generate(stream_name, kinesis_client):
while True:
data = get_data()
print(data)
kinesis_client.put_record(
StreamName=stream_name, Data=json.dumps(data), PartitionKey="partitionkey"
)
if __name__ == "__main__":
generate(STREAM_NAME, boto3.client("kinesis"))
Later in the tutorial, you run the stock.py
script to send data to the application.
$ python stock.py
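The script requires the boto3 package and uses your default AWS credentials and Region. If you want to run it against the adminuser profile and the US West (Oregon) Region explicitly, one way is to set the standard environment variables, for example:
pip install boto3
AWS_PROFILE=adminuser AWS_DEFAULT_REGION=us-west-2 python stock.py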
The Java application code for this example is available from GitHub. To download the application code, do the following:
Clone the remote repository using the following command:
git clone https://github.com/aws-samples/amazon-kinesis-data-analytics-examples.git
Navigate to the amazon-kinesis-data-analytics-java-examples/GettingStarted_1_8
directory.
Note the following about the application code:
A Project Object Model (pom.xml) file contains information about the application's configuration and dependencies, including the Managed Service for Apache Flink libraries.
The BasicStreamingJob.java
file contains the main
method that defines the application's functionality.
The application uses a Kinesis source to read from the source stream. The following snippet creates the Kinesis source:
return env.addSource(new FlinkKinesisConsumer<>(inputStreamName,
new SimpleStringSchema(), inputProperties));
Your application creates source and sink connectors to access external resources using a StreamExecutionEnvironment
object.
The application creates source and sink connectors using static properties. To use dynamic application properties, use the createSourceFromApplicationProperties
and createSinkFromApplicationProperties
methods to create the connectors. These methods read the application's properties to configure the connectors.
For more information about runtime properties, see Use runtime properties.
In this section, you use the Apache Maven compiler to create the Java code for the application. For information about installing Apache Maven and the Java Development Kit (JDK), see Prerequisites for completing the exercises.
To compile the application codeTo use your application code, you compile and package it into a JAR file. You can compile and package your code in one of two ways:
Use the command-line Maven tool. Create your JAR file by running the following command in the directory that contains the pom.xml
file:
mvn package -Dflink.version=1.8.2
Use your development environment. See your development environment documentation for details.
NoteThe provided source code relies on libraries from Java 1.8. Ensure that your project's Java version is 1.8.
You can either upload your package as a JAR file, or you can compress your package and upload it as a ZIP file. If you create your application using the AWS CLI, you specify your code content type (JAR or ZIP).
If there are errors while compiling, verify that your JAVA_HOME
environment variable is correctly set.
If the application compiles successfully, the following file is created:
target/aws-kinesis-analytics-java-apps-1.0.jar
In this section, you create an Amazon Simple Storage Service (Amazon S3) bucket and upload your application code.
To upload the application codeOpen the Amazon S3 console at https://console.aws.amazon.com/s3/.
Choose Create bucket.
Enter ka-app-code-<username> in the Bucket name field. Add a suffix to the bucket name, such as your user name, to make it globally unique. Choose Next.
In the Configure options step, keep the settings as they are, and choose Next.
In the Set permissions step, keep the settings as they are, and choose Next.
Choose Create bucket.
In the Amazon S3 console, choose the ka-app-code-<username>
bucket, and choose Upload.
In the Select files step, choose Add files. Navigate to the aws-kinesis-analytics-java-apps-1.0.jar
file that you created in the previous step. Choose Next.
You don't need to change any of the settings for the object, so choose Upload.
Your application code is now stored in an Amazon S3 bucket where your application can access it.
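As an alternative to the console upload, you can copy the JAR to the bucket with the AWS CLI. The bucket name is the one you created above, including your own suffix:
aws s3 cp target/aws-kinesis-analytics-java-apps-1.0.jar s3://ka-app-code-<username>/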
Create and run the Managed Service for Apache Flink applicationYou can create and run a Managed Service for Apache Flink application using either the console or the AWS CLI.
NoteWhen you create the application using the console, your AWS Identity and Access Management (IAM) and Amazon CloudWatch Logs resources are created for you. When you create the application using the AWS CLI, you create these resources separately.
Create and run the application (console)Follow these steps to create, configure, update, and run the application using the console.
Create the applicationOpen the Managed Service for Apache Flink console at https://console.aws.amazon.com/flink
On the Managed Service for Apache Flink dashboard, choose Create analytics application.
On the Managed Service for Apache Flink - Create application page, provide the application details as follows:
For Application name, enter MyApplication
.
For Description, enter My java test app
.
For Runtime, choose Apache Flink.
Leave the version pulldown as Apache Flink 1.8 (Recommended Version).
For Access permissions, choose Create / update IAM role kinesis-analytics-MyApplication-us-west-2
.
Choose Create application.
When you create a Managed Service for Apache Flink application using the console, you have the option of having an IAM role and policy created for your application. Your application uses this role and policy to access its dependent resources. These IAM resources are named using your application name and Region as follows:
Policy: kinesis-analytics-service-MyApplication-us-west-2
Role: kinesisanalytics-MyApplication-us-west-2
Edit the IAM policy to add permissions to access the Kinesis data streams.
Open the IAM console at https://console.aws.amazon.com/iam/.
Choose Policies. Choose the kinesis-analytics-service-MyApplication-us-west-2
policy that the console created for you in the previous section.
On the Summary page, choose Edit policy. Choose the JSON tab.
Add the highlighted section of the following policy example to the policy. Replace the sample account IDs (012345678901
) with your account ID.
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "ReadCode",
"Effect": "Allow",
"Action": [
"s3:GetObject",
"s3:GetObjectVersion"
],
"Resource": [
"arn:aws:s3:::ka-app-code-username
/aws-kinesis-analytics-java-apps-1.0.jar"
]
},
{
"Sid": "DescribeLogGroups",
"Effect": "Allow",
"Action": [
"logs:DescribeLogGroups"
],
"Resource": [
"arn:aws:logs:us-west-2:012345678901
:log-group:*"
]
},
{
"Sid": "DescribeLogStreams",
"Effect": "Allow",
"Action": [
"logs:DescribeLogStreams"
],
"Resource": [
"arn:aws:logs:us-west-2:012345678901
:log-group:/aws/kinesis-analytics/MyApplication:log-stream:*"
]
},
{
"Sid": "PutLogEvents",
"Effect": "Allow",
"Action": [
"logs:PutLogEvents"
],
"Resource": [
"arn:aws:logs:us-west-2:012345678901
:log-group:/aws/kinesis-analytics/MyApplication:log-stream:kinesis-analytics-log-stream"
]
},
{
"Sid": "ReadInputStream",
"Effect": "Allow",
"Action": "kinesis:*",
"Resource": "arn:aws:kinesis:us-west-2:012345678901
:stream/ExampleInputStream"
},
{
"Sid": "WriteOutputStream",
"Effect": "Allow",
"Action": "kinesis:*",
"Resource": "arn:aws:kinesis:us-west-2:012345678901
:stream/ExampleOutputStream"
}
]
}
On the MyApplication page, choose Configure.
On the Configure application page, provide the Code location:
For Amazon S3 bucket, enter ka-app-code-<username>.
For Path to Amazon S3 object, enter aws-kinesis-analytics-java-apps-1.0.jar
.
Under Access to application resources, for Access permissions, choose Create / update IAM role kinesis-analytics-MyApplication-us-west-2
.
Enter the following application properties and values:
Group ID                  Key                         Value
ProducerConfigProperties  flink.inputstream.initpos   LATEST
ProducerConfigProperties  aws.region                  us-west-2
ProducerConfigProperties  AggregationEnabled          false
Under Monitoring, ensure that the Monitoring metrics level is set to Application.
For CloudWatch logging, select the Enable check box.
Choose Update.
When you choose to enable Amazon CloudWatch logging, Managed Service for Apache Flink creates a log group and log stream for you. The names of these resources are as follows:
Log group: /aws/kinesis-analytics/MyApplication
Log stream: kinesis-analytics-log-stream
On the MyApplication page, choose Run. Confirm the action.
When the application is running, refresh the page. The console shows the Application graph.
On the MyApplication page, choose Stop. Confirm the action.
Update the applicationUsing the console, you can update application settings such as application properties, monitoring settings, and the location or file name of the application JAR. You can also reload the application JAR from the Amazon S3 bucket if you need to update the application code.
On the MyApplication page, choose Configure. Update the application settings and choose Update.
Create and run the application (AWS CLI)In this section, you use the AWS CLI to create and run the Managed Service for Apache Flink application. Managed Service for Apache Flink uses the kinesisanalyticsv2
AWS CLI command to create and interact with Managed Service for Apache Flink applications.
You must create a permissions policy and role for your application. If you do not create these IAM resources, your application cannot access its data and log streams.
First, you create a permissions policy with two statements: one that grants permissions for the read
action on the source stream, and another that grants permissions for write
actions on the sink stream. You then attach the policy to an IAM role (which you create in the next section). Thus, when Managed Service for Apache Flink assumes the role, the service has the necessary permissions to read from the source stream and write to the sink stream.
Use the following code to create the AKReadSourceStreamWriteSinkStream permissions policy. Replace username with the user name that you used to create the Amazon S3 bucket to store the application code. Replace the account ID in the Amazon Resource Names (ARNs) (012345678901) with your account ID.
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "S3",
"Effect": "Allow",
"Action": [
"s3:GetObject",
"s3:GetObjectVersion"
],
"Resource": ["arn:aws:s3:::ka-app-code-username
",
"arn:aws:s3:::ka-app-code-username
/*"
]
},
{
"Sid": "ReadInputStream",
"Effect": "Allow",
"Action": "kinesis:*",
"Resource": "arn:aws:kinesis:us-west-2:012345678901
:stream/ExampleInputStream"
},
{
"Sid": "WriteOutputStream",
"Effect": "Allow",
"Action": "kinesis:*",
"Resource": "arn:aws:kinesis:us-west-2:012345678901
:stream/ExampleOutputStream"
}
]
}
For step-by-step instructions to create a permissions policy, see Tutorial: Create and Attach Your First Customer Managed Policy in the IAM User Guide.
NoteTo access other Amazon services, you can use the AWS SDK for Java. Managed Service for Apache Flink automatically sets the credentials required by the SDK to those of the service execution IAM role that is associated with your application. No additional steps are needed.
Create an IAM roleIn this section, you create an IAM role that the Managed Service for Apache Flink application can assume to read a source stream and write to the sink stream.
Managed Service for Apache Flink cannot access your stream without permissions. You grant these permissions via an IAM role. Each IAM role has two policies attached. The trust policy grants Managed Service for Apache Flink permission to assume the role, and the permissions policy determines what Managed Service for Apache Flink can do after assuming the role.
You attach the permissions policy that you created in the preceding section to this role.
To create an IAM roleOpen the IAM console at https://console.aws.amazon.com/iam/.
In the navigation pane, choose Roles, Create Role.
Under Select type of trusted identity, choose AWS Service. Under Choose the service that will use this role, choose Kinesis. Under Select your use case, choose Kinesis Analytics.
Choose Next: Permissions.
On the Attach permissions policies page, choose Next: Review. You attach permissions policies after you create the role.
On the Create role page, enter MF-stream-rw-role
for the Role name. Choose Create role.
Now you have created a new IAM role called MF-stream-rw-role
. Next, you update the trust and permissions policies for the role.
Attach the permissions policy to the role.
NoteFor this exercise, Managed Service for Apache Flink assumes this role for both reading data from a Kinesis data stream (source) and writing output to another Kinesis data stream. So you attach the policy that you created in the previous step, Create a Permissions Policy.
On the Summary page, choose the Permissions tab.
Choose Attach Policies.
In the search box, enter AKReadSourceStreamWriteSinkStream
(the policy that you created in the previous section).
Choose the AKReadSourceStreamWriteSinkStream policy, and choose Attach policy.
You now have created the service execution role that your application uses to access resources. Make a note of the ARN of the new role.
For step-by-step instructions for creating a role, see Creating an IAM Role (Console) in the IAM User Guide.
Create the Managed Service for Apache Flink applicationSave the following JSON code to a file named create_request.json. Replace the sample role ARN with the ARN for the role that you created previously. Replace the bucket ARN suffix (username) with the suffix that you chose in the previous section. Replace the sample account ID (012345678901) in the service execution role with your account ID.
{
"ApplicationName": "test",
"ApplicationDescription": "my java test app",
"RuntimeEnvironment": "FLINK-1_8",
"ServiceExecutionRole": "arn:aws:iam::012345678901
:role/MF-stream-rw-role",
"ApplicationConfiguration": {
"ApplicationCodeConfiguration": {
"CodeContent": {
"S3ContentLocation": {
"BucketARN": "arn:aws:s3:::ka-app-code-username
",
"FileKey": "aws-kinesis-analytics-java-apps-1.0.jar"
}
},
"CodeContentType": "ZIPFILE"
},
"EnvironmentProperties": {
"PropertyGroups": [
{
"PropertyGroupId": "ProducerConfigProperties",
"PropertyMap" : {
"flink.stream.initpos" : "LATEST",
"aws.region" : "us-west-2",
"AggregationEnabled" : "false"
}
},
{
"PropertyGroupId": "ConsumerConfigProperties",
"PropertyMap" : {
"aws.region" : "us-west-2"
}
}
]
}
}
}
Execute the CreateApplication
action with the preceding request to create the application:
aws kinesisanalyticsv2 create-application --cli-input-json file://create_request.json
The application is now created. You start the application in the next step.
Start the applicationIn this section, you use the StartApplication
action to start the application.
Save the following JSON code to a file named start_request.json
.
{
"ApplicationName": "test",
"RunConfiguration": {
"ApplicationRestoreConfiguration": {
"ApplicationRestoreType": "RESTORE_FROM_LATEST_SNAPSHOT"
}
}
}
Execute the StartApplication
action with the preceding request to start the application:
aws kinesisanalyticsv2 start-application --cli-input-json file://start_request.json
The application is now running. You can check the Managed Service for Apache Flink metrics on the Amazon CloudWatch console to verify that the application is working.
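You can also verify end to end that processed records are arriving on the output stream by reading it with the AWS CLI. This sketch assumes a single shard with the default shard ID; the Data field in the response is Base64-encoded:
SHARD_ITERATOR=$(aws kinesis get-shard-iterator --stream-name ExampleOutputStream --shard-id shardId-000000000000 --shard-iterator-type TRIM_HORIZON --region us-west-2 --query 'ShardIterator' --output text)
aws kinesis get-records --shard-iterator "$SHARD_ITERATOR" --region us-west-2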
Stop the applicationIn this section, you use the StopApplication
action to stop the application.
Save the following JSON code to a file named stop_request.json
.
{
"ApplicationName": "test"
}
Execute the StopApplication
action with the following request to stop the application:
aws kinesisanalyticsv2 stop-application --cli-input-json file://stop_request.json
The application is now stopped.
Add a CloudWatch logging optionYou can use the AWS CLI to add an Amazon CloudWatch log stream to your application. For information about using CloudWatch Logs with your application, see Set up application logging in Managed Service for Apache Flink.
Update environment propertiesIn this section, you use the UpdateApplication
action to change the environment properties for the application without recompiling the application code. In this example, you change the Region of the source and destination streams.
Save the following JSON code to a file named update_properties_request.json
.
{"ApplicationName": "test",
"CurrentApplicationVersionId": 1,
"ApplicationConfigurationUpdate": {
"EnvironmentPropertyUpdates": {
"PropertyGroups": [
{
"PropertyGroupId": "ProducerConfigProperties",
"PropertyMap" : {
"flink.stream.initpos" : "LATEST",
"aws.region" : "us-west-2",
"AggregationEnabled" : "false"
}
},
{
"PropertyGroupId": "ConsumerConfigProperties",
"PropertyMap" : {
"aws.region" : "us-west-2"
}
}
]
}
}
}
Execute the UpdateApplication
action with the preceding request to update environment properties:
aws kinesisanalyticsv2 update-application --cli-input-json file://update_properties_request.json
When you need to update your application code with a new version of your code package, you use the UpdateApplication
AWS CLI action.
To load a new version of the application code with the same file name, you must specify the new object version. For more information about using Amazon S3 object versions, see Enabling or Disabling Versioning.
To use the AWS CLI, delete your previous code package from your Amazon S3 bucket, upload the new version, and call UpdateApplication
, specifying the same Amazon S3 bucket and object name, and the new object version. The application will restart with the new code package.
The following sample request for the UpdateApplication
action reloads the application code and restarts the application. Update the CurrentApplicationVersionId
to the current application version. You can check the current application version using the ListApplications
or DescribeApplication
actions. Update the bucket name suffix (<username>
) with the suffix that you chose in the Create two Amazon Kinesis data streams section.
{
"ApplicationName": "test",
"CurrentApplicationVersionId": 1
,
"ApplicationConfigurationUpdate": {
"ApplicationCodeConfigurationUpdate": {
"CodeContentUpdate": {
"S3ContentLocationUpdate": {
"BucketARNUpdate": "arn:aws:s3:::ka-app-code-username
",
"FileKeyUpdate": "aws-kinesis-analytics-java-apps-1.0.jar",
"ObjectVersionUpdate": "SAMPLEUehYngP87ex1nzYIGYgfhypvDU
"
}
}
}
}
}
Next step
Step 4: Clean up AWS resources
Step 4: Clean up AWS resourcesThis section includes procedures for cleaning up AWS resources created in the Getting Started tutorial.
Delete your Managed Service for Apache Flink applicationOpen the Kinesis console at https://console.aws.amazon.com/kinesis.
In the Managed Service for Apache Flink panel, choose MyApplication.
Choose Configure.
In the Snapshots section, choose Disable and then choose Update.
In the application's page, choose Delete and then confirm the deletion.
Open the Managed Service for Apache Flink console at https://console.aws.amazon.com/flink
In the Kinesis Data Streams panel, choose ExampleInputStream.
In the ExampleInputStream page, choose Delete Kinesis Stream and then confirm the deletion.
In the Kinesis streams page, choose the ExampleOutputStream, choose Actions, choose Delete, and then confirm the deletion.
Open the Amazon S3 console at https://console.aws.amazon.com/s3/.
Choose the ka-app-code-<username>
bucket.
Choose Delete and then enter the bucket name to confirm deletion.
Open the IAM console at https://console.aws.amazon.com/iam/.
In the navigation bar, choose Policies.
In the filter control, enter kinesis.
Choose the kinesis-analytics-service-MyApplication-us-west-2 policy.
Choose Policy Actions and then choose Delete.
In the navigation bar, choose Roles.
Choose the kinesis-analytics-MyApplication-us-west-2 role.
Choose Delete role and then confirm the deletion.
Open the CloudWatch console at https://console.aws.amazon.com/cloudwatch/.
In the navigation bar, choose Logs.
Choose the /aws/kinesis-analytics/MyApplication log group.
Choose Delete Log Group and then confirm the deletion.
Apache Flink versions 1.6, 1.8, and 1.11 have not been supported by the Apache Flink community for over three years. We plan to deprecate these versions in Amazon Managed Service for Apache Flink on November 5, 2024. Starting from this date, you will not be able to create new applications for these Flink versions. You can continue running existing applications at this time. You can upgrade your applications statefully using the in-place version upgrades feature in Amazon Managed Service for Apache Flink. For more information, see Use in-place version upgrades for Apache Flink.
This topic contains a version of the Tutorial: Get started using the DataStream API in Managed Service for Apache Flink tutorial that uses Apache Flink 1.6.2.
Components of a Managed Service for Apache Flink applicationTo process data, your Managed Service for Apache Flink application uses a Java/Apache Maven or Scala application that processes input and produces output using the Apache Flink runtime.
A Managed Service for Apache Flink application has the following components:
Runtime properties: You can use runtime properties to configure your application without recompiling your application code.
Source: The application consumes data by using a source. A source connector reads data from a Kinesis data stream, an Amazon S3 bucket, etc. For more information, see Add streaming data sources.
Operators: The application processes data by using one or more operators. An operator can transform, enrich, or aggregate data. For more information, see Operators.
Sink: The application produces data to external sources by using sinks. A sink connector writes data to a Kinesis data stream, a Firehose stream, an Amazon S3 bucket, etc. For more information, see Write data using sinks.
After you create, compile, and package your application, you upload the code package to an Amazon Simple Storage Service (Amazon S3) bucket. You then create a Managed Service for Apache Flink application. You pass in the code package location, a Kinesis data stream as the streaming data source, and typically a streaming or file location that receives the application's processed data.
Prerequisites for completing the exercisesTo complete the steps in this guide, you must have the following:
Java Development Kit (JDK) version 8. Set the JAVA_HOME
environment variable to point to your JDK install location.
We recommend that you use a development environment (such as Eclipse Java Neon or IntelliJ Idea) to develop and compile your application.
Git Client. Install the Git client if you haven't already.
Apache Maven Compiler Plugin. Maven must be in your working path. To test your Apache Maven installation, enter the following:
$ mvn -version
To get started, go to Step 1: Set up an AWS account and create an administrator user.
Step 1: Set up an AWS account and create an administrator user Sign up for an AWS accountIf you do not have an AWS account, complete the following steps to create one.
To sign up for an AWS accountFollow the online instructions.
Part of the sign-up procedure involves receiving a phone call or text message and entering a verification code on the phone keypad.
When you sign up for an AWS account, an AWS account root user is created. The root user has access to all AWS services and resources in the account. As a security best practice, assign administrative access to a user, and use only the root user to perform tasks that require root user access.
AWS sends you a confirmation email after the sign-up process is complete. At any time, you can view your current account activity and manage your account by going to https://aws.amazon.com/ and choosing My Account.
Create a user with administrative accessAfter you sign up for an AWS account, secure your AWS account root user, enable AWS IAM Identity Center, and create an administrative user so that you don't use the root user for everyday tasks.
Sign in as the user with administrative accessTo sign in with your IAM Identity Center user, use the sign-in URL that was sent to your email address when you created the IAM Identity Center user.
For help signing in using an IAM Identity Center user, see Signing in to the AWS access portal in the AWS Sign-In User Guide.
In IAM Identity Center, create a permission set that follows the best practice of applying least-privilege permissions.
For instructions, see Create a permission set in the AWS IAM Identity Center User Guide.
Assign users to a group, and then assign single sign-on access to the group.
For instructions, see Add groups in the AWS IAM Identity Center User Guide.
Users need programmatic access if they want to interact with AWS outside of the AWS Management Console. The way to grant programmatic access depends on the type of user that's accessing AWS.
To grant users programmatic access, choose one of the following options.
Workforce identity (Users managed in IAM Identity Center): Use temporary credentials to sign programmatic requests to the AWS CLI, AWS SDKs, or AWS APIs. Follow the instructions for the interface that you want to use.
For the AWS CLI, see Configuring the AWS CLI to use AWS IAM Identity Center in the AWS Command Line Interface User Guide.
For AWS SDKs, tools, and AWS APIs, see IAM Identity Center authentication in the AWS SDKs and Tools Reference Guide.
IAM (Not recommended): Use long-term credentials to sign programmatic requests to the AWS CLI, AWS SDKs, or AWS APIs. Follow the instructions for the interface that you want to use.
For the AWS CLI, see Authenticating using IAM user credentials in the AWS Command Line Interface User Guide.
For AWS SDKs and tools, see Authenticate using long-term credentials in the AWS SDKs and Tools Reference Guide.
For AWS APIs, see Managing access keys for IAM users in the IAM User Guide.
In this step, you download and configure the AWS CLI to use with Managed Service for Apache Flink.
NoteThe getting started exercises in this guide assume that you are using administrator credentials (adminuser
) in your account to perform the operations.
If you already have the AWS CLI installed, you might need to upgrade to get the latest functionality. For more information, see Installing the AWS Command Line Interface in the AWS Command Line Interface User Guide. To check the version of the AWS CLI, run the following command:
aws --version
The exercises in this tutorial require the following AWS CLI version or later:
aws-cli/1.16.63
To set up the AWS CLI
Download and configure the AWS CLI. For instructions, see the following topics in the AWS Command Line Interface User Guide:
Add a named profile for the administrator user in the AWS CLI config
file. You use this profile when executing the AWS CLI commands. For more information about named profiles, see Named Profiles in the AWS Command Line Interface User Guide.
[profile adminuser]
aws_access_key_id = adminuser access key ID
aws_secret_access_key = adminuser secret access key
region = aws-region
For a list of available AWS Regions, see Regions and Endpoints in the Amazon Web Services General Reference.
NoteThe example code and commands in this tutorial use the US West (Oregon) Region. To use a different Region, change the Region in the code and commands for this tutorial to the Region you want to use.
Verify the setup by entering the following help command at the command prompt:
aws help
After you set up an AWS account and the AWS CLI, you can try the next exercise, in which you configure a sample application and test the end-to-end setup.
Next stepStep 3: Create and run a Managed Service for Apache Flink application
Step 3: Create and run a Managed Service for Apache Flink applicationIn this exercise, you create a Managed Service for Apache Flink application with data streams as a source and a sink.
Create two Amazon Kinesis data streamsBefore you create a Managed Service for Apache Flink application for this exercise, create two Kinesis data streams (ExampleInputStream
and ExampleOutputStream
). Your application uses these streams for the application source and destination streams.
You can create these streams using either the Amazon Kinesis console or the following AWS CLI command. For console instructions, see Creating and Updating Data Streams in the Amazon Kinesis Data Streams Developer Guide.
To create the data streams (AWS CLI)To create the first stream (ExampleInputStream
), use the following Amazon Kinesis create-stream
AWS CLI command.
$ aws kinesis create-stream \
--stream-name ExampleInputStream \
--shard-count 1 \
--region us-west-2 \
--profile adminuser
To create the second stream that the application uses to write output, run the same command, changing the stream name to ExampleOutputStream
.
$ aws kinesis create-stream \
--stream-name ExampleOutputStream \
--shard-count 1 \
--region us-west-2 \
--profile adminuser
In this section, you use a Python script to write sample records to the stream for the application to process.
Create a file named stock.py
with the following contents:
import datetime
import json
import random
import boto3
STREAM_NAME = "ExampleInputStream"
def get_data():
return {
"EVENT_TIME": datetime.datetime.now().isoformat(),
"TICKER": random.choice(["AAPL", "AMZN", "MSFT", "INTC", "TBV"]),
"PRICE": round(random.random() * 100, 2),
}
def generate(stream_name, kinesis_client):
while True:
data = get_data()
print(data)
kinesis_client.put_record(
StreamName=stream_name, Data=json.dumps(data), PartitionKey="partitionkey"
)
if __name__ == "__main__":
generate(STREAM_NAME, boto3.client("kinesis"))
Later in the tutorial, you run the stock.py
script to send data to the application.
$ python stock.py
The Java application code for this example is available from GitHub. To download the application code, do the following:
Clone the remote repository using the following command:
git clone https://github.com/aws-samples/amazon-kinesis-data-analytics-examples.git
Navigate to the amazon-kinesis-data-analytics-java-examples/GettingStarted_1_6
directory.
Note the following about the application code:
A Project Object Model (pom.xml) file contains information about the application's configuration and dependencies, including the Managed Service for Apache Flink libraries.
The BasicStreamingJob.java
file contains the main
method that defines the application's functionality.
The application uses a Kinesis source to read from the source stream. The following snippet creates the Kinesis source:
return env.addSource(new FlinkKinesisConsumer<>(inputStreamName,
new SimpleStringSchema(), inputProperties));
Your application creates source and sink connectors to access external resources using a StreamExecutionEnvironment
object.
The application creates source and sink connectors using static properties. To use dynamic application properties, use the createSourceFromApplicationProperties
and createSinkFromApplicationProperties
methods to create the connectors. These methods read the application's properties to configure the connectors.
For more information about runtime properties, see Use runtime properties.
In this section, you use the Apache Maven compiler to create the Java code for the application. For information about installing Apache Maven and the Java Development Kit (JDK), see Prerequisites for completing the exercises.
NoteIn order to use the Kinesis connector with versions of Apache Flink prior to 1.11, you need to download the source code for the connector and build it as described in the Apache Flink documentation.
To compile the application codeTo use your application code, you compile and package it into a JAR file. You can compile and package your code in one of two ways:
Use the command-line Maven tool. Create your JAR file by running the following command in the directory that contains the pom.xml
file:
mvn package
Use your development environment. See your development environment documentation for details.
You can either upload your package as a JAR file, or you can compress your package and upload it as a ZIP file. If you create your application using the AWS CLI, you specify your code content type (JAR or ZIP).
If there are errors while compiling, verify that your JAVA_HOME
environment variable is correctly set.
If the application compiles successfully, the following file is created:
target/aws-kinesis-analytics-java-apps-1.0.jar
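If you want a quick sanity check that the JAR was built and contains the application classes, you can list its contents. This assumes a JDK is on your path and uses the class name from the downloaded example:
jar tf target/aws-kinesis-analytics-java-apps-1.0.jar | grep BasicStreamingJob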
In this section, you create an Amazon Simple Storage Service (Amazon S3) bucket and upload your application code.
To upload the application codeOpen the Amazon S3 console at https://console.aws.amazon.com/s3/.
Choose Create bucket.
Enter ka-app-code-<username> in the Bucket name field. Add a suffix to the bucket name, such as your user name, to make it globally unique. Choose Next.
In the Configure options step, keep the settings as they are, and choose Next.
In the Set permissions step, keep the settings as they are, and choose Next.
Choose Create bucket.
In the Amazon S3 console, choose the ka-app-code-<username>
bucket, and choose Upload.
In the Select files step, choose Add files. Navigate to the aws-kinesis-analytics-java-apps-1.0.jar
file that you created in the previous step. Choose Next.
In the Set permissions step, keep the settings as they are. Choose Next.
In the Set properties step, keep the settings as they are. Choose Upload.
Your application code is now stored in an Amazon S3 bucket where your application can access it.
Create and run the Managed Service for Apache Flink applicationYou can create and run a Managed Service for Apache Flink application using either the console or the AWS CLI.
NoteWhen you create the application using the console, your AWS Identity and Access Management (IAM) and Amazon CloudWatch Logs resources are created for you. When you create the application using the AWS CLI, you create these resources separately.
Create and run the application (console)Follow these steps to create, configure, update, and run the application using the console.
Create the applicationOpen the Managed Service for Apache Flink console at https://console.aws.amazon.com/flink
On the Managed Service for Apache Flink dashboard, choose Create analytics application.
On the Managed Service for Apache Flink - Create application page, provide the application details as follows:
For Application name, enter MyApplication
.
For Description, enter My java test app
.
For Runtime, choose Apache Flink.
NoteManaged Service for Apache Flink uses Apache Flink version 1.8.2 or 1.6.2.
Change the version pulldown to Apache Flink 1.6.
For Access permissions, choose Create / update IAM role kinesis-analytics-MyApplication-us-west-2
.
Choose Create application.
When you create a Managed Service for Apache Flink application using the console, you have the option of having an IAM role and policy created for your application. Your application uses this role and policy to access its dependent resources. These IAM resources are named using your application name and Region as follows:
Policy: kinesis-analytics-service-MyApplication-us-west-2
Role: kinesisanalytics-MyApplication-us-west-2
Edit the IAM policy to add permissions to access the Kinesis data streams.
Open the IAM console at https://console.aws.amazon.com/iam/.
Choose Policies. Choose the kinesis-analytics-service-MyApplication-us-west-2
policy that the console created for you in the previous section.
On the Summary page, choose Edit policy. Choose the JSON tab.
Add the highlighted section of the following policy example to the policy. Replace the sample account IDs (012345678901
) with your account ID.
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "ReadCode",
"Effect": "Allow",
"Action": [
"s3:GetObject",
"s3:GetObjectVersion"
],
"Resource": [
"arn:aws:s3:::ka-app-code-username
/java-getting-started-1.0.jar"
]
},
{
"Sid": "DescribeLogGroups",
"Effect": "Allow",
"Action": [
"logs:DescribeLogGroups"
],
"Resource": [
"arn:aws:logs:us-west-2:012345678901
:log-group:*"
]
},
{
"Sid": "DescribeLogStreams",
"Effect": "Allow",
"Action": [
"logs:DescribeLogStreams"
],
"Resource": [
"arn:aws:logs:us-west-2:012345678901
:log-group:/aws/kinesis-analytics/MyApplication:log-stream:*"
]
},
{
"Sid": "PutLogEvents",
"Effect": "Allow",
"Action": [
"logs:PutLogEvents"
],
"Resource": [
"arn:aws:logs:us-west-2:012345678901
:log-group:/aws/kinesis-analytics/MyApplication:log-stream:kinesis-analytics-log-stream"
]
},
{
"Sid": "ReadInputStream",
"Effect": "Allow",
"Action": "kinesis:*",
"Resource": "arn:aws:kinesis:us-west-2:012345678901
:stream/ExampleInputStream"
},
{
"Sid": "WriteOutputStream",
"Effect": "Allow",
"Action": "kinesis:*",
"Resource": "arn:aws:kinesis:us-west-2:012345678901
:stream/ExampleOutputStream"
}
]
}
On the MyApplication page, choose Configure.
On the Configure application page, provide the Code location:
For Amazon S3 bucket, enter ka-app-code-<username>.
For Path to Amazon S3 object, enter java-getting-started-1.0.jar
.
Under Access to application resources, for Access permissions, choose Create / update IAM role kinesis-analytics-MyApplication-us-west-2
.
Enter the following application properties and values:
Group ID                  Key                         Value
ProducerConfigProperties  flink.inputstream.initpos   LATEST
ProducerConfigProperties  aws.region                  us-west-2
ProducerConfigProperties  AggregationEnabled          false
Under Monitoring, ensure that the Monitoring metrics level is set to Application.
For CloudWatch logging, select the Enable check box.
Choose Update.
When you choose to enable Amazon CloudWatch logging, Managed Service for Apache Flink creates a log group and log stream for you. The names of these resources are as follows:
Log group: /aws/kinesis-analytics/MyApplication
Log stream: kinesis-analytics-log-stream
On the MyApplication page, choose Run. Confirm the action.
When the application is running, refresh the page. The console shows the Application graph.
On the MyApplication page, choose Stop. Confirm the action.
Update the application
Using the console, you can update application settings such as application properties, monitoring settings, and the location or file name of the application JAR. You can also reload the application JAR from the Amazon S3 bucket if you need to update the application code.
On the MyApplication page, choose Configure. Update the application settings and choose Update.
Create and run the application (AWS CLI)
In this section, you use the AWS CLI to create and run the Managed Service for Apache Flink application. Managed Service for Apache Flink uses the kinesisanalyticsv2 AWS CLI command to create and interact with Managed Service for Apache Flink applications.
Create a permissions policy
First, you create a permissions policy with two statements: one that grants permissions for the read
action on the source stream, and another that grants permissions for write
actions on the sink stream. You then attach the policy to an IAM role (which you create in the next section). Thus, when Managed Service for Apache Flink assumes the role, the service has the necessary permissions to read from the source stream and write to the sink stream.
Use the following code to create the AKReadSourceStreamWriteSinkStream
permissions policy. Replace username with the user name that you used to create the Amazon S3 bucket to store the application code. Replace the sample account ID (012345678901) in the Amazon Resource Names (ARNs) with your account ID.
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "S3",
"Effect": "Allow",
"Action": [
"s3:GetObject",
"s3:GetObjectVersion"
],
"Resource": ["arn:aws:s3:::ka-app-code-username
",
"arn:aws:s3:::ka-app-code-username
/*"
]
},
{
"Sid": "ReadInputStream",
"Effect": "Allow",
"Action": "kinesis:*",
"Resource": "arn:aws:kinesis:us-west-2:012345678901
:stream/ExampleInputStream"
},
{
"Sid": "WriteOutputStream",
"Effect": "Allow",
"Action": "kinesis:*",
"Resource": "arn:aws:kinesis:us-west-2:012345678901
:stream/ExampleOutputStream"
}
]
}
For step-by-step instructions to create a permissions policy, see Tutorial: Create and Attach Your First Customer Managed Policy in the IAM User Guide.
Note
To access other Amazon services, you can use the AWS SDK for Java. Managed Service for Apache Flink automatically sets the credentials required by the SDK to those of the service execution IAM role that is associated with your application. No additional steps are needed.
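For illustration, here is a minimal sketch (not part of the tutorial's sample code) that lists the tutorial's code bucket with the AWS SDK for Java. Note that no credentials are configured explicitly; the default credentials provider chain resolves the service execution role at runtime:

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;

// No explicit credentials: the default provider chain picks up the service execution role.
AmazonS3 s3 = AmazonS3ClientBuilder.standard()
        .withRegion("us-west-2")
        .build();
// "ka-app-code-username" is the sample bucket name used in this tutorial.
s3.listObjectsV2("ka-app-code-username")
        .getObjectSummaries()
        .forEach(summary -> System.out.println(summary.getKey()));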
Create an IAM role
In this section, you create an IAM role that the Managed Service for Apache Flink application can assume to read a source stream and write to the sink stream.
Managed Service for Apache Flink cannot access your stream without permissions. You grant these permissions via an IAM role. Each IAM role has two policies attached. The trust policy grants Managed Service for Apache Flink permission to assume the role, and the permissions policy determines what Managed Service for Apache Flink can do after assuming the role.
You attach the permissions policy that you created in the preceding section to this role.
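For reference, the role's trust policy looks similar to the following; the console creates it for you when you choose Kinesis Analytics as the use case in the next procedure:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Service": "kinesisanalytics.amazonaws.com"
            },
            "Action": "sts:AssumeRole"
        }
    ]
}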
To create an IAM role
Open the IAM console at https://console.aws.amazon.com/iam/.
In the navigation pane, choose Roles, Create Role.
Under Select type of trusted identity, choose AWS Service. Under Choose the service that will use this role, choose Kinesis. Under Select your use case, choose Kinesis Analytics.
Choose Next: Permissions.
On the Attach permissions policies page, choose Next: Review. You attach permissions policies after you create the role.
On the Create role page, enter MF-stream-rw-role
for the Role name. Choose Create role.
Now you have created a new IAM role called MF-stream-rw-role
. Next, you update the trust and permissions policies for the role.
Attach the permissions policy to the role.
Note
For this exercise, Managed Service for Apache Flink assumes this role for both reading data from a Kinesis data stream (source) and writing output to another Kinesis data stream. So you attach the policy that you created in the previous step, Create a permissions policy.
On the Summary page, choose the Permissions tab.
Choose Attach Policies.
In the search box, enter AKReadSourceStreamWriteSinkStream
(the policy that you created in the previous section).
Choose the AKReadSourceStreamWriteSinkStream policy, and choose Attach policy.
You have now created the service execution role that your application uses to access resources. Make a note of the ARN of the new role.
For step-by-step instructions for creating a role, see Creating an IAM Role (Console) in the IAM User Guide.
Create the Managed Service for Apache Flink application
Save the following JSON code to a file named create_request.json. Replace the sample role ARN with the ARN for the role that you created previously. Replace the bucket ARN suffix (username) with the suffix that you chose in the previous section. Replace the sample account ID (012345678901) in the service execution role with your account ID.
{
"ApplicationName": "test",
"ApplicationDescription": "my java test app",
"RuntimeEnvironment": "FLINK-1_6",
"ServiceExecutionRole": "arn:aws:iam::012345678901
:role/MF-stream-rw-role",
"ApplicationConfiguration": {
"ApplicationCodeConfiguration": {
"CodeContent": {
"S3ContentLocation": {
"BucketARN": "arn:aws:s3:::ka-app-code-username
",
"FileKey": "java-getting-started-1.0.jar"
}
},
"CodeContentType": "ZIPFILE"
},
"EnvironmentProperties": {
"PropertyGroups": [
{
"PropertyGroupId": "ProducerConfigProperties",
"PropertyMap" : {
"flink.stream.initpos" : "LATEST",
"aws.region" : "us-west-2",
"AggregationEnabled" : "false"
}
},
{
"PropertyGroupId": "ConsumerConfigProperties",
"PropertyMap" : {
"aws.region" : "us-west-2"
}
}
]
}
}
}
Execute the CreateApplication
action with the preceding request to create the application:
aws kinesisanalyticsv2 create-application --cli-input-json file://create_request.json
The application is now created. You start the application in the next step.
Start the application
In this section, you use the StartApplication
action to start the application.
Save the following JSON code to a file named start_request.json
.
{
"ApplicationName": "test",
"RunConfiguration": {
"ApplicationRestoreConfiguration": {
"ApplicationRestoreType": "RESTORE_FROM_LATEST_SNAPSHOT"
}
}
}
Execute the StartApplication
action with the preceding request to start the application:
aws kinesisanalyticsv2 start-application --cli-input-json file://start_request.json
The application is now running. You can check the Managed Service for Apache Flink metrics on the Amazon CloudWatch console to verify that the application is working.
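You can also check the application status from the AWS CLI; for example:
aws kinesisanalyticsv2 describe-application --application-name test --query 'ApplicationDetail.ApplicationStatus'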
Stop the application
In this section, you use the StopApplication
action to stop the application.
Save the following JSON code to a file named stop_request.json
.
{
"ApplicationName": "test"
}
Execute the StopApplication
action with the following request to stop the application:
aws kinesisanalyticsv2 stop-application --cli-input-json file://stop_request.json
The application is now stopped.
Add a CloudWatch logging option
You can use the AWS CLI to add an Amazon CloudWatch log stream to your application. For information about using CloudWatch Logs with your application, see Set up application logging in Managed Service for Apache Flink.
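For example, a sketch of adding a logging option with the CLI might look like the following; the log stream ARN and version ID shown are the sample values from this tutorial, so adjust them for your application:

aws kinesisanalyticsv2 add-application-cloud-watch-logging-option \
    --application-name test \
    --current-application-version-id 1 \
    --cloud-watch-logging-option LogStreamARN=arn:aws:logs:us-west-2:012345678901:log-group:/aws/kinesis-analytics/MyApplication:log-stream:kinesis-analytics-log-stream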
Update environment properties
In this section, you use the UpdateApplication
action to change the environment properties for the application without recompiling the application code. In this example, you change the Region of the source and destination streams.
Save the following JSON code to a file named update_properties_request.json
.
{"ApplicationName": "test",
"CurrentApplicationVersionId": 1,
"ApplicationConfigurationUpdate": {
"EnvironmentPropertyUpdates": {
"PropertyGroups": [
{
"PropertyGroupId": "ProducerConfigProperties",
"PropertyMap" : {
"flink.stream.initpos" : "LATEST",
"aws.region" : "us-west-2",
"AggregationEnabled" : "false"
}
},
{
"PropertyGroupId": "ConsumerConfigProperties",
"PropertyMap" : {
"aws.region" : "us-west-2"
}
}
]
}
}
}
Execute the UpdateApplication
action with the preceding request to update environment properties:
aws kinesisanalyticsv2 update-application --cli-input-json file://update_properties_request.json
Update the application code
When you need to update your application code with a new version of your code package, you use the UpdateApplication
AWS CLI action.
To use the AWS CLI, delete your previous code package from your Amazon S3 bucket, upload the new version, and call UpdateApplication
, specifying the same Amazon S3 bucket and object name. The application will restart with the new code package.
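For example, you might upload the new code package with the AWS CLI first; the bucket and object names shown are the sample values from this tutorial:

aws s3 cp java-getting-started-1.0.jar s3://ka-app-code-username/java-getting-started-1.0.jar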
The following sample request for the UpdateApplication
action reloads the application code and restarts the application. Update the CurrentApplicationVersionId
to the current application version. You can check the current application version using the ListApplications
or DescribeApplication
actions. Update the bucket name suffix (<username>
) with the suffix that you chose in the Create two Amazon Kinesis data streams section.
{
"ApplicationName": "test",
"CurrentApplicationVersionId": 1
,
"ApplicationConfigurationUpdate": {
"ApplicationCodeConfigurationUpdate": {
"CodeContentUpdate": {
"S3ContentLocationUpdate": {
"BucketARNUpdate": "arn:aws:s3:::ka-app-code-username
",
"FileKeyUpdate": "java-getting-started-1.0.jar"
}
}
}
}
}
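For example, assuming you save the preceding request to a file named update_code_request.json (an illustrative file name), you apply it as follows:

aws kinesisanalyticsv2 update-application --cli-input-json file://update_code_request.json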
Step 4: Clean up AWS resources
This section includes procedures for cleaning up AWS resources created in the Getting Started tutorial.
Delete your Managed Service for Apache Flink application
Open the Managed Service for Apache Flink console at https://console.aws.amazon.com/flink.
In the Managed Service for Apache Flink panel, choose MyApplication.
Choose Configure.
In the Snapshots section, choose Disable and then choose Update.
In the application's page, choose Delete and then confirm the deletion.
Open the Kinesis console at https://console.aws.amazon.com/kinesis.
In the Kinesis Data Streams panel, choose ExampleInputStream.
In the ExampleInputStream page, choose Delete Kinesis Stream and then confirm the deletion.
In the Kinesis streams page, choose the ExampleOutputStream, choose Actions, choose Delete, and then confirm the deletion.
Open the Amazon S3 console at https://console.aws.amazon.com/s3/.
Choose the ka-app-code-<username>
bucket.
Choose Delete and then enter the bucket name to confirm deletion.
Open the IAM console at https://console.aws.amazon.com/iam/.
In the navigation bar, choose Policies.
In the filter control, enter kinesis.
Choose the kinesis-analytics-service-MyApplication-us-west-2 policy.
Choose Policy Actions and then choose Delete.
In the navigation bar, choose Roles.
Choose the kinesis-analytics-MyApplication-us-west-2 role.
Choose Delete role and then confirm the deletion.
Open the CloudWatch console at https://console.aws.amazon.com/cloudwatch/.
In the navigation bar, choose Logs.
Choose the /aws/kinesis-analytics/MyApplication log group.
Choose Delete Log Group and then confirm the deletion.
This section provides examples of creating and working with applications in Managed Service for Apache Flink. They include example code and step-by-step instructions to help you create Managed Service for Apache Flink applications and test your results.
Before you explore these examples, we recommend that you first review the following:
Note
These examples assume that you are using the US West (Oregon) Region (us-west-2). If you are using a different Region, update your application code, commands, and IAM roles appropriately.
The following examples demonstrate how to create applications using the Apache Flink DataStream API.
Example: Tumbling window
In this exercise, you create a Managed Service for Apache Flink application that aggregates data using a tumbling window. Aggregation is enabled by default in Flink. To disable it, use the following:
'sink.producer.aggregation-enabled' = 'false'
Create dependent resources
Before you create a Managed Service for Apache Flink application for this exercise, you create the following dependent resources:
Two Kinesis data streams (ExampleInputStream
and ExampleOutputStream
)
An Amazon S3 bucket to store the application's code (ka-app-code-<username>)
You can create the Kinesis streams and Amazon S3 bucket using the console. For instructions for creating these resources, see the following topics:
Creating and Updating Data Streams in the Amazon Kinesis Data Streams Developer Guide. Name your data streams ExampleInputStream and ExampleOutputStream.
How Do I Create an S3 Bucket? in the Amazon Simple Storage Service User Guide. Give the Amazon S3 bucket a globally unique name by appending your login name, such as ka-app-code-<username>.
In this section, you use a Python script to write sample records to the stream for the application to process.
Create a file named stock.py
with the following contents:
import datetime
import json
import random
import boto3
STREAM_NAME = "ExampleInputStream"
def get_data():
return {
'event_time': datetime.datetime.now().isoformat(),
'ticker': random.choice(['AAPL', 'AMZN', 'MSFT', 'INTC', 'TBV']),
'price': round(random.random() * 100, 2)}
def generate(stream_name, kinesis_client):
while True:
data = get_data()
print(data)
kinesis_client.put_record(
StreamName=stream_name,
Data=json.dumps(data),
PartitionKey="partitionkey")
if __name__ == '__main__':
generate(STREAM_NAME, boto3.client('kinesis', region_name='us-west-2'))
Run the stock.py
script:
$ python stock.py
Keep the script running while completing the rest of the tutorial.
The Java application code for this example is available from GitHub. To download the application code, do the following:
Install the Git client if you haven't already. For more information, see Installing Git.
Clone the remote repository with the following command:
git clone https://github.com/aws-samples/amazon-kinesis-data-analytics-examples.git
Navigate to the amazon-kinesis-data-analytics-java-examples/TumblingWindow
directory.
The application code is located in the TumblingWindowStreamingJob.java
file. Note the following about the application code:
The application uses a Kinesis source to read from the source stream. The following snippet creates the Kinesis source:
return env.addSource(new FlinkKinesisConsumer<>(inputStreamName,
new SimpleStringSchema(), inputProperties));
Add the following import statement:
import org.apache.flink.streaming.api.windowing.assigners.TumblingProcessingTimeWindows; //flink 1.13 onward
The application uses a tumbling window to find the count of values for each stock symbol over a 5-second window. The following code creates the operator and sends the aggregated data to a new Kinesis Data Streams sink:
input.flatMap(new Tokenizer()) // Tokenizer for generating words
.keyBy(0) // Logically partition the stream for each word
.window(TumblingProcessingTimeWindows.of(Time.seconds(5))) //Flink 1.13 onward
.sum(1) // Sum the number of words per partition
.map(value -> value.f0 + "," + value.f1.toString() + "\n")
.addSink(createSinkFromStaticConfig());
To compile the application, run Apache Maven in the application's directory; an example command follows. Compiling the application creates the application JAR file (target/aws-kinesis-analytics-java-apps-1.0.jar).
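A typical compile command, assuming Apache Maven is installed and you are in the TumblingWindow directory (the Flink version flag matches the one used elsewhere in this guide; adjust it to your runtime if needed):

mvn package -Dflink.version=1.15.3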
In this section, you upload your application code to the Amazon S3 bucket you created in the Create dependent resources section.
In the Amazon S3 console, choose the ka-app-code-<username>
bucket, and choose Upload.
In the Select files step, choose Add files. Navigate to the aws-kinesis-analytics-java-apps-1.0.jar
file that you created in the previous step.
You don't need to change any of the settings for the object, so choose Upload.
Your application code is now stored in an Amazon S3 bucket where your application can access it.
Create and run the Managed Service for Apache Flink application
Follow these steps to create, configure, update, and run the application using the console.
Create the application
Open the Managed Service for Apache Flink console at https://console.aws.amazon.com/flink.
On the Managed Service for Apache Flink dashboard, choose Create analytics application.
On the Managed Service for Apache Flink - Create application page, provide the application details as follows:
For Application name, enter MyApplication
.
For Runtime, choose Apache Flink.
Note
Managed Service for Apache Flink uses Apache Flink version 1.15.2.
Leave the version pulldown as Apache Flink version 1.15.2 (Recommended version).
For Access permissions, choose Create / update IAM role kinesis-analytics-MyApplication-us-west-2
.
Choose Create application.
When you create a Managed Service for Apache Flink application using the console, you have the option of having an IAM role and policy created for your application. Your application uses this role and policy to access its dependent resources. These IAM resources are named using your application name and Region as follows:
Policy: kinesis-analytics-service-MyApplication-us-west-2
Role: kinesisanalytics-MyApplication-us-west-2
Edit the IAM policy to add permissions to access the Kinesis data streams.
Open the IAM console at https://console.aws.amazon.com/iam/.
Choose Policies. Choose the kinesis-analytics-service-MyApplication-us-west-2
policy that the console created for you in the previous section.
On the Summary page, choose Edit policy. Choose the JSON tab.
Add the highlighted section of the following policy example to the policy. Replace the sample account IDs (012345678901
) with your account ID.
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "ReadCode",
"Effect": "Allow",
"Action": [
"s3:GetObject",
"logs:DescribeLogGroups",
"s3:GetObjectVersion"
],
"Resource": [
"arn:aws:logs:us-west-2:012345678901
:log-group:*",
"arn:aws:s3:::ka-app-code-<username>
/aws-kinesis-analytics-java-apps-1.0.jar"
]
},
{
"Sid": "DescribeLogStreams",
"Effect": "Allow",
"Action": "logs:DescribeLogStreams",
"Resource": "arn:aws:logs:us-west-2:012345678901
:log-group:/aws/kinesis-analytics/MyApplication:log-stream:*"
},
{
"Sid": "PutLogEvents",
"Effect": "Allow",
"Action": "logs:PutLogEvents",
"Resource": "arn:aws:logs:us-west-2:012345678901
:log-group:/aws/kinesis-analytics/MyApplication:log-stream:kinesis-analytics-log-stream"
},
{
"Sid": "ListCloudwatchLogGroups",
"Effect": "Allow",
"Action": [
"logs:DescribeLogGroups"
],
"Resource": [
"arn:aws:logs:us-west-2:012345678901
:log-group:*"
]
},
{
"Sid": "ReadInputStream",
"Effect": "Allow",
"Action": "kinesis:*",
"Resource": "arn:aws:kinesis:us-west-2:012345678901
:stream/ExampleInputStream"
},
{
"Sid": "WriteOutputStream",
"Effect": "Allow",
"Action": "kinesis:*",
"Resource": "arn:aws:kinesis:us-west-2:012345678901
:stream/ExampleOutputStream"
}
]
}
On the MyApplication page, choose Configure.
On the Configure application page, provide the Code location:
For Amazon S3 bucket, enter ka-app-code-<username>.
For Path to Amazon S3 object, enter aws-kinesis-analytics-java-apps-1.0.jar
.
Under Access to application resources, for Access permissions, choose Create / update IAM role kinesis-analytics-MyApplication-us-west-2
.
Under Monitoring, ensure that the Monitoring metrics level is set to Application.
For CloudWatch logging, select the Enable check box.
Choose Update.
When you choose to enable CloudWatch logging, Managed Service for Apache Flink creates a log group and log stream for you. The names of these resources are as follows:
Log group: /aws/kinesis-analytics/MyApplication
Log stream: kinesis-analytics-log-stream
This log stream is used to monitor the application. This is not the same log stream that the application uses to send results.
Run the application
On the MyApplication page, choose Run. Leave the Run without snapshot option selected, and confirm the action.
When the application is running, refresh the page. The console shows the Application graph.
You can check the Managed Service for Apache Flink metrics on the CloudWatch console to verify that the application is working.
Clean up AWS resources
This section includes procedures for cleaning up AWS resources created in the Tumbling Window tutorial.
Delete your Managed Service for Apache Flink application
Open the Managed Service for Apache Flink console at https://console.aws.amazon.com/flink.
In the Managed Service for Apache Flink panel, choose MyApplication.
In the application's page, choose Delete and then confirm the deletion.
Open the Kinesis console at https://console.aws.amazon.com/kinesis.
In the Kinesis Data Streams panel, choose ExampleInputStream.
In the ExampleInputStream page, choose Delete Kinesis Stream and then confirm the deletion.
In the Kinesis streams page, choose the ExampleOutputStream, choose Actions, choose Delete, and then confirm the deletion.
Open the Amazon S3 console at https://console.aws.amazon.com/s3/.
Choose the ka-app-code-<username>
bucket.
Choose Delete and then enter the bucket name to confirm deletion.
Open the IAM console at https://console.aws.amazon.com/iam/.
In the navigation bar, choose Policies.
In the filter control, enter kinesis.
Choose the kinesis-analytics-service-MyApplication-us-west-2 policy.
Choose Policy Actions and then choose Delete.
In the navigation bar, choose Roles.
Choose the kinesis-analytics-MyApplication-us-west-2 role.
Choose Delete role and then confirm the deletion.
Open the CloudWatch console at https://console.aws.amazon.com/cloudwatch/.
In the navigation bar, choose Logs.
Choose the /aws/kinesis-analytics/MyApplication log group.
Choose Delete Log Group and then confirm the deletion.
Example: Sliding window
Create dependent resources
Before you create a Managed Service for Apache Flink application for this exercise, you create the following dependent resources:
Two Kinesis data streams (ExampleInputStream
and ExampleOutputStream
).
An Amazon S3 bucket to store the application's code (ka-app-code-<username>)
You can create the Kinesis streams and Amazon S3 bucket using the console. For instructions for creating these resources, see the following topics:
Creating and Updating Data Streams in the Amazon Kinesis Data Streams Developer Guide. Name your data streams ExampleInputStream
and ExampleOutputStream
.
How Do I Create an S3 Bucket? in the Amazon Simple Storage Service User Guide. Give the Amazon S3 bucket a globally unique name by appending your login name, such as ka-app-code-<username>.
In this section, you use a Python script to write sample records to the stream for the application to process.
Create a file named stock.py
with the following contents:
import datetime
import json
import random
import boto3
STREAM_NAME = "ExampleInputStream"
def get_data():
return {
"EVENT_TIME": datetime.datetime.now().isoformat(),
"TICKER": random.choice(["AAPL", "AMZN", "MSFT", "INTC", "TBV"]),
"PRICE": round(random.random() * 100, 2),
}
def generate(stream_name, kinesis_client):
while True:
data = get_data()
print(data)
kinesis_client.put_record(
StreamName=stream_name, Data=json.dumps(data), PartitionKey="partitionkey"
)
if __name__ == "__main__":
generate(STREAM_NAME, boto3.client("kinesis"))
Run the stock.py
script:
$ python stock.py
Keep the script running while completing the rest of the tutorial.
The Java application code for this example is available from GitHub. To download the application code, do the following:
Install the Git client if you haven't already. For more information, see Installing Git.
Clone the remote repository with the following command:
git clone https://github.com/aws-samples/amazon-kinesis-data-analytics-examples.git
Navigate to the amazon-kinesis-data-analytics-java-examples/SlidingWindow
directory.
The application code is located in the SlidingWindowStreamingJobWithParallelism.java
file. Note the following about the application code:
The application uses a Kinesis source to read from the source stream. The following snippet creates the Kinesis source:
return env.addSource(new FlinkKinesisConsumer<>(inputStreamName,
new SimpleStringSchema(), inputProperties));
The application uses a sliding window to find the minimum value for each stock symbol over a 10-second window that slides by 5 seconds, and sends the aggregated data to a new Kinesis Data Streams sink.
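The following is a sketch of this pattern, assuming the same JSON ticker/price records and helper names (jsonParser, createSinkFromStaticConfig) used in the other examples; the SlidingWindowStreamingJobWithParallelism.java file in the sample repository may differ in its details. It uses the SlidingProcessingTimeWindows assigner:

import org.apache.flink.streaming.api.windowing.assigners.SlidingProcessingTimeWindows;

input.map(value -> { // Parse the JSON record into a (ticker, price) tuple
            JsonNode jsonNode = jsonParser.readValue(value, JsonNode.class);
            return new Tuple2<>(jsonNode.get("ticker").toString(), jsonNode.get("price").asDouble());
        }).returns(Types.TUPLE(Types.STRING, Types.DOUBLE))
        .keyBy(v -> v.f0) // Logically partition the stream by ticker
        .window(SlidingProcessingTimeWindows.of(Time.seconds(10), Time.seconds(5)))
        .min(1) // Find the minimum price per ticker over each window
        .setParallelism(3) // Set parallelism for the min operator
        .map(value -> value.f0 + ": min price = " + value.f1.toString() + "\n")
        .addSink(createSinkFromStaticConfig());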
To compile the application, run Apache Maven in the application's directory, as in the previous example. Compiling the application creates the application JAR file (target/aws-kinesis-analytics-java-apps-1.0.jar).
In this section, you upload your application code to the Amazon S3 bucket that you created in the Create dependent resources section.
In the Amazon S3 console, choose the ka-app-code-<username>
bucket, and then choose Upload.
In the Select files step, choose Add files. Navigate to the aws-kinesis-analytics-java-apps-1.0.jar
file that you created in the previous step.
You don't need to change any of the settings for the object, so choose Upload.
Your application code is now stored in an Amazon S3 bucket where your application can access it.
Create and run the Managed Service for Apache Flink application
Follow these steps to create, configure, update, and run the application using the console.
Create the application
Open the Managed Service for Apache Flink console at https://console.aws.amazon.com/flink.
On the Managed Service for Apache Flink dashboard, choose Create analytics application.
On the Managed Service for Apache Flink - Create application page, provide the application details as follows:
For Application name, enter MyApplication
.
For Runtime, choose Apache Flink.
Leave the version pulldown as Apache Flink version 1.15.2 (Recommended version).
For Access permissions, choose Create / update IAM role kinesis-analytics-MyApplication-us-west-2
.
Choose Create application.
When you create a Managed Service for Apache Flink application using the console, you have the option of having an IAM role and policy created for your application. Your application uses this role and policy to access its dependent resources. These IAM resources are named using your application name and Region as follows:
Policy: kinesis-analytics-service-MyApplication-us-west-2
Role: kinesisanalytics-MyApplication-us-west-2
Edit the IAM policy to add permissions to access the Kinesis data streams.
Open the IAM console at https://console.aws.amazon.com/iam/.
Choose Policies. Choose the kinesis-analytics-service-MyApplication-us-west-2
policy that the console created for you in the previous section.
On the Summary page, choose Edit policy. Choose the JSON tab.
Add the highlighted section of the following policy example to the policy. Replace the sample account IDs (012345678901
) with your account ID.
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "ReadCode",
"Effect": "Allow",
"Action": [
"s3:GetObject",
"logs:DescribeLogGroups",
"s3:GetObjectVersion"
],
"Resource": [
"arn:aws:logs:us-west-2:012345678901
:log-group:*",
"arn:aws:s3:::ka-app-code-<username>
/aws-kinesis-analytics-java-apps-1.0.jar"
]
},
{
"Sid": "DescribeLogStreams",
"Effect": "Allow",
"Action": "logs:DescribeLogStreams",
"Resource": "arn:aws:logs:us-west-2:012345678901
:log-group:/aws/kinesis-analytics/MyApplication:log-stream:*"
},
{
"Sid": "PutLogEvents",
"Effect": "Allow",
"Action": "logs:PutLogEvents",
"Resource": "arn:aws:logs:us-west-2:012345678901
:log-group:/aws/kinesis-analytics/MyApplication:log-stream:kinesis-analytics-log-stream"
},
{
"Sid": "ListCloudwatchLogGroups",
"Effect": "Allow",
"Action": [
"logs:DescribeLogGroups"
],
"Resource": [
"arn:aws:logs:us-west-2:012345678901
:log-group:*"
]
},
{
"Sid": "ReadInputStream",
"Effect": "Allow",
"Action": "kinesis:*",
"Resource": "arn:aws:kinesis:us-west-2:012345678901
:stream/ExampleInputStream"
},
{
"Sid": "WriteOutputStream",
"Effect": "Allow",
"Action": "kinesis:*",
"Resource": "arn:aws:kinesis:us-west-2:012345678901
:stream/ExampleOutputStream"
}
]
}
On the MyApplication page, choose Configure.
On the Configure application page, provide the Code location:
For Amazon S3 bucket, enter ka-app-code-<username>.
For Path to Amazon S3 object, enter aws-kinesis-analytics-java-apps-1.0.jar
.
Under Access to application resources, for Access permissions, choose Create / update IAM role kinesis-analytics-MyApplication-us-west-2
.
Under Monitoring, ensure that the Monitoring metrics level is set to Application.
For CloudWatch logging, select the Enable check box.
Choose Update.
When you choose to enable Amazon CloudWatch logging, Managed Service for Apache Flink creates a log group and log stream for you. The names of these resources are as follows:
Log group: /aws/kinesis-analytics/MyApplication
Log stream: kinesis-analytics-log-stream
This log stream is used to monitor the application. This is not the same log stream that the application uses to send results.
Configure the application parallelism
This application example uses parallel execution of tasks. The following application code sets the parallelism of the min
operator:
.setParallelism(3) // Set parallelism for the min operator
The application parallelism can't be greater than the provisioned parallelism, which has a default of 1. To increase your application's parallelism, use the following AWS CLI action:
aws kinesisanalyticsv2 update-application \
    --application-name MyApplication \
    --current-application-version-id <VersionId> \
    --application-configuration-update "{\"FlinkApplicationConfigurationUpdate\": { \"ParallelismConfigurationUpdate\": {\"ParallelismUpdate\": 5, \"ConfigurationTypeUpdate\": \"CUSTOM\" }}}"
You can retrieve the current application version ID using the DescribeApplication or ListApplications actions.
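For example, the following command returns the current version ID of the sample application:

aws kinesisanalyticsv2 describe-application --application-name MyApplication --query 'ApplicationDetail.ApplicationVersionId'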
Run the application
To view the Flink job graph, run the application, open the Apache Flink dashboard, and choose the desired Flink job.
You can check the Managed Service for Apache Flink metrics on the CloudWatch console to verify that the application is working.
Clean up AWS resources
This section includes procedures for cleaning up AWS resources created in the Sliding Window tutorial.
Delete your Managed Service for Apache Flink application
Open the Managed Service for Apache Flink console at https://console.aws.amazon.com/flink.
In the Managed Service for Apache Flink panel, choose MyApplication.
In the application's page, choose Delete and then confirm the deletion.
Open the Kinesis console at https://console.aws.amazon.com/kinesis.
In the Kinesis Data Streams panel, choose ExampleInputStream.
In the ExampleInputStream page, choose Delete Kinesis Stream and then confirm the deletion.
In the Kinesis streams page, choose the ExampleOutputStream, choose Actions, choose Delete, and then confirm the deletion.
Open the Amazon S3 console at https://console.aws.amazon.com/s3/.
Choose the ka-app-code-<username>
bucket.
Choose Delete and then enter the bucket name to confirm deletion.
Open the IAM console at https://console.aws.amazon.com/iam/.
In the navigation bar, choose Policies.
In the filter control, enter kinesis.
Choose the kinesis-analytics-service-MyApplication-us-west-2 policy.
Choose Policy Actions and then choose Delete.
In the navigation bar, choose Roles.
Choose the kinesis-analytics-MyApplication-us-west-2 role.
Choose Delete role and then confirm the deletion.
Open the CloudWatch console at https://console.aws.amazon.com/cloudwatch/.
In the navigation bar, choose Logs.
Choose the /aws/kinesis-analytics/MyApplication log group.
Choose Delete Log Group and then confirm the deletion.
Example: Writing to an Amazon S3 bucket
In this exercise, you create a Managed Service for Apache Flink application that has a Kinesis data stream as a source and an Amazon S3 bucket as a sink. Using the sink, you can verify the output of the application in the Amazon S3 console.
Create dependent resources
Before you create a Managed Service for Apache Flink application for this exercise, you create the following dependent resources:
A Kinesis data stream (ExampleInputStream
).
An Amazon S3 bucket to store the application's code and output (ka-app-code-<username>)
Managed Service for Apache Flink cannot write data to Amazon S3 with server-side encryption enabled on Managed Service for Apache Flink.
You can create the Kinesis stream and Amazon S3 bucket using the console. For instructions for creating these resources, see the following topics:
Creating and Updating Data Streams in the Amazon Kinesis Data Streams Developer Guide. Name your data stream ExampleInputStream
.
How Do I Create an S3 Bucket? in the Amazon Simple Storage Service User Guide. Give the Amazon S3 bucket a globally unique name by appending your login name, such as ka-app-code-<username>. Create two folders (code and data) in the Amazon S3 bucket.
The application creates the following CloudWatch resources if they don't already exist:
A log group called /AWS/KinesisAnalytics-java/MyApplication
.
A log stream called kinesis-analytics-log-stream
.
In this section, you use a Python script to write sample records to the stream for the application to process.
Create a file named stock.py
with the following contents:
import datetime
import json
import random
import boto3
STREAM_NAME = "ExampleInputStream"
def get_data():
return {
'event_time': datetime.datetime.now().isoformat(),
'ticker': random.choice(['AAPL', 'AMZN', 'MSFT', 'INTC', 'TBV']),
'price': round(random.random() * 100, 2)}
def generate(stream_name, kinesis_client):
while True:
data = get_data()
print(data)
kinesis_client.put_record(
StreamName=stream_name,
Data=json.dumps(data),
PartitionKey="partitionkey")
if __name__ == '__main__':
generate(STREAM_NAME, boto3.client('kinesis', region_name='us-west-2'))
Run the stock.py
script:
$ python stock.py
Keep the script running while completing the rest of the tutorial.
The Java application code for this example is available from GitHub. To download the application code, do the following:
Install the Git client if you haven't already. For more information, see Installing Git.
Clone the remote repository with the following command:
git clone https://github.com/aws-samples/amazon-kinesis-data-analytics-examples.git
Navigate to the amazon-kinesis-data-analytics-java-examples/S3Sink
directory.
The application code is located in the S3StreamingSinkJob.java
file. Note the following about the application code:
The application uses a Kinesis source to read from the source stream. The following snippet creates the Kinesis source:
return env.addSource(new FlinkKinesisConsumer<>(inputStreamName,
new SimpleStringSchema(), inputProperties));
You need to add the following import statement:
import org.apache.flink.streaming.api.windowing.assigners.TumblingProcessingTimeWindows;
The application uses an Apache Flink S3 sink to write to Amazon S3.
The sink reads messages in a tumbling window, encodes messages into S3 bucket objects, and sends the encoded objects to the S3 sink. The following code encodes objects for sending to Amazon S3:
input.map(value -> { // Parse the JSON
JsonNode jsonNode = jsonParser.readValue(value, JsonNode.class);
return new Tuple2<>(jsonNode.get("ticker").toString(), 1);
}).returns(Types.TUPLE(Types.STRING, Types.INT))
.keyBy(v -> v.f0) // Logically partition the stream for each word
.window(TumblingProcessingTimeWindows.of(Time.minutes(1)))
.sum(1) // Count the appearances by ticker per partition
.map(value -> value.f0 + " count: " + value.f1.toString() + "\n")
.addSink(createS3SinkFromStaticConfig());
In this section, you modify the application code to write output to your Amazon S3 bucket.
Update the following line with your user name to specify the application's output location:
private static final String s3SinkPath = "s3a://ka-app-code-<username>
/data";
Compile the application code
To compile the application, run Apache Maven in the application's directory, as in the previous examples. Compiling the application creates the application JAR file (target/aws-kinesis-analytics-java-apps-1.0.jar).
The provided source code relies on libraries from Java 11.
Upload the Apache Flink streaming Java code
In this section, you upload your application code to the Amazon S3 bucket you created in the Create dependent resources section.
In the Amazon S3 console, choose the ka-app-code-<username>
bucket, navigate to the code folder, and choose Upload.
In the Select files step, choose Add files. Navigate to the aws-kinesis-analytics-java-apps-1.0.jar
file that you created in the previous step.
You don't need to change any of the settings for the object, so choose Upload.
Your application code is now stored in an Amazon S3 bucket where your application can access it.
Create and run the Managed Service for Apache Flink application
Follow these steps to create, configure, update, and run the application using the console.
Create the application
Open the Managed Service for Apache Flink console at https://console.aws.amazon.com/flink.
On the Managed Service for Apache Flink dashboard, choose Create analytics application.
On the Managed Service for Apache Flink - Create application page, provide the application details as follows:
For Application name, enter MyApplication
.
For Runtime, choose Apache Flink.
Leave the version pulldown as Apache Flink version 1.15.2 (Recommended version).
For Access permissions, choose Create / update IAM role kinesis-analytics-MyApplication-us-west-2
.
Choose Create application.
When you create a Managed Service for Apache Flink application using the console, you have the option of having an IAM role and policy created for your application. Your application uses this role and policy to access its dependent resources. These IAM resources are named using your application name and Region as follows:
Policy: kinesis-analytics-service-MyApplication-us-west-2
Role: kinesisanalytics-MyApplication-us-west-2
Edit the IAM policy to add permissions to access the Kinesis data stream.
Open the IAM console at https://console.aws.amazon.com/iam/.
Choose Policies. Choose the kinesis-analytics-service-MyApplication-us-west-2
policy that the console created for you in the previous section.
On the Summary page, choose Edit policy. Choose the JSON tab.
Add the highlighted section of the following policy example to the policy. Replace the sample account IDs (012345678901
) with your account ID. Replace <username> with your user name.
{
    "Version": "2012-10-17",
    "Statement": [
        {
"Sid": "S3",
"Effect": "Allow",
"Action": [
"s3:Abort*",
"s3:DeleteObject*",
"s3:GetObject*",
"s3:GetBucket*",
"s3:List*",
"s3:ListBucket",
"s3:PutObject"
],
"Resource": [
"arn:aws:s3:::ka-app-code-<username>",
"arn:aws:s3:::ka-app-code-<username>/*"
]
},
{
"Sid": "ListCloudwatchLogGroups",
"Effect": "Allow",
"Action": [
"logs:DescribeLogGroups"
],
"Resource": [
"arn:aws:logs:region:account-id:log-group:*"
]
},
{
"Sid": "ListCloudwatchLogStreams",
"Effect": "Allow",
"Action": [
"logs:DescribeLogStreams"
],
"Resource": [
"arn:aws:logs:region:account-id:log-group:%LOG_GROUP_PLACEHOLDER%:log-stream:*"
]
},
{
"Sid": "PutCloudwatchLogs",
"Effect": "Allow",
"Action": [
"logs:PutLogEvents"
],
"Resource": [
"arn:aws:logs:region:account-id:log-group:%LOG_GROUP_PLACEHOLDER%:log-stream:%LOG_STREAM_PLACEHOLDER%"
]
},
{
"Sid": "ReadInputStream",
"Effect": "Allow",
"Action": "kinesis:*",
"Resource": "arn:aws:kinesis:us-west-2:012345678901
:stream/ExampleInputStream"
}
]
}
On the MyApplication page, choose Configure.
On the Configure application page, provide the Code location:
For Amazon S3 bucket, enter ka-app-code-<username>.
For Path to Amazon S3 object, enter code/aws-kinesis-analytics-java-apps-1.0.jar
.
Under Access to application resources, for Access permissions, choose Create / update IAM role kinesis-analytics-MyApplication-us-west-2
.
Under Monitoring, ensure that the Monitoring metrics level is set to Application.
For CloudWatch logging, select the Enable check box.
Choose Update.
When you choose to enable CloudWatch logging, Managed Service for Apache Flink creates a log group and log stream for you. The names of these resources are as follows:
Log group: /aws/kinesis-analytics/MyApplication
Log stream: kinesis-analytics-log-stream
This log stream is used to monitor the application. This is not the same log stream that the application uses to send results.
Run the application
On the MyApplication page, choose Run. Leave the Run without snapshot option selected, and confirm the action.
When the application is running, refresh the page. The console shows the Application graph.
In the Amazon S3 console, open the data folder in your S3 bucket.
After a few minutes, objects containing aggregated data from the application will appear.
Note
Aggregation is enabled by default in Flink. To disable it, use the following:
'sink.producer.aggregation-enabled' = 'false'
Optional: Customize the source and sink
In this section, you customize settings on the source and sink objects.
Note
After changing the code sections described in the sections following, do the following to reload the application code:
Repeat the steps in the Compile the application code section to compile the updated application code.
Repeat the steps in the Upload the Apache Flink streaming Java code section to upload the updated application code.
On the application's page in the console, choose Configure and then choose Update to reload the updated application code into your application.
In this section, you configure the names of the folders that the streaming file sink creates in the S3 bucket. You do this by adding a bucket assigner to the streaming file sink.
To customize the folder names created in the S3 bucket, do the following:
Add the following import statements to the beginning of the S3StreamingSinkJob.java
file:
import org.apache.flink.streaming.api.functions.sink.filesystem.rollingpolicies.DefaultRollingPolicy;
import org.apache.flink.streaming.api.functions.sink.filesystem.bucketassigners.DateTimeBucketAssigner;
Update the createS3SinkFromStaticConfig()
method in the code to look like the following:
private static StreamingFileSink<String> createS3SinkFromStaticConfig() {
final StreamingFileSink<String> sink = StreamingFileSink
.forRowFormat(new Path(s3SinkPath), new SimpleStringEncoder<String>("UTF-8"))
.withBucketAssigner(new DateTimeBucketAssigner("yyyy-MM-dd--HH"))
.withRollingPolicy(DefaultRollingPolicy.create().build())
.build();
return sink;
}
The preceding code example uses the DateTimeBucketAssigner
with a custom date format to create folders in the S3 bucket. The DateTimeBucketAssigner
uses the current system time to create bucket names. If you want to create a custom bucket assigner to further customize the created folder names, you can create a class that implements BucketAssigner. You implement your custom logic by using the getBucketId
method.
A custom implementation of BucketAssigner
can use the Context parameter to obtain more information about a record in order to determine its destination folder.
In this section, you configure the frequency of reads on the source stream.
The Kinesis Streams consumer reads from the source stream five times per second by default. This frequency will cause issues if there is more than one client reading from the stream, or if the application needs to retry reading a record. You can avoid these issues by setting the read frequency of the consumer.
To set the read frequency of the Kinesis consumer, you set the SHARD_GETRECORDS_INTERVAL_MILLIS
setting.
The following code example sets the SHARD_GETRECORDS_INTERVAL_MILLIS
setting to one second:
kinesisConsumerConfig.setProperty(ConsumerConfigConstants.SHARD_GETRECORDS_INTERVAL_MILLIS, "1000");
Configure write buffering
In this section, you configure the write frequency and other settings of the sink.
By default, the application writes to the destination bucket every minute. You can change this interval and other settings by configuring the DefaultRollingPolicy
object.
The Apache Flink streaming file sink writes to its output bucket every time the application creates a checkpoint. The application creates a checkpoint every minute by default. To increase the write interval of the S3 sink, you must also increase the checkpoint interval.
To configure the DefaultRollingPolicy
object, do the following:
Increase the application's CheckpointInterval
setting. The following input for the UpdateApplication action sets the checkpoint interval to 10 minutes:
{
"ApplicationConfigurationUpdate": {
"FlinkApplicationConfigurationUpdate": {
"CheckpointConfigurationUpdate": {
"ConfigurationTypeUpdate" : "CUSTOM",
"CheckpointIntervalUpdate": 600000
}
}
},
"ApplicationName": "MyApplication",
"CurrentApplicationVersionId": 5
}
To use the preceding code, specify the current application version. You can retrieve the application version by using the ListApplications action.
Add the following import statement to the beginning of the S3StreamingSinkJob.java
file:
import java.util.concurrent.TimeUnit;
Update the createS3SinkFromStaticConfig
method in the S3StreamingSinkJob.java
file to look like the following:
private static StreamingFileSink<String> createS3SinkFromStaticConfig() {
final StreamingFileSink<String> sink = StreamingFileSink
.forRowFormat(new Path(s3SinkPath), new SimpleStringEncoder<String>("UTF-8"))
.withBucketAssigner(new DateTimeBucketAssigner("yyyy-MM-dd--HH"))
.withRollingPolicy(
DefaultRollingPolicy.create()
.withRolloverInterval(TimeUnit.MINUTES.toMillis(8))
.withInactivityInterval(TimeUnit.MINUTES.toMillis(5))
.withMaxPartSize(1024 * 1024 * 1024)
.build())
.build();
return sink;
}
The preceding code example sets the frequency of writes to the Amazon S3 bucket to 8 minutes.
For more information about configuring the Apache Flink streaming file sink, see Row-encoded Formats in the Apache Flink documentation.
Clean up AWS resources
This section includes procedures for cleaning up AWS resources that you created in the Amazon S3 tutorial.
Delete your Managed Service for Apache Flink application
Open the Managed Service for Apache Flink console at https://console.aws.amazon.com/flink.
In the Managed Service for Apache Flink panel, choose MyApplication.
On the application's page, choose Delete and then confirm the deletion.
Open the Kinesis console at https://console.aws.amazon.com/kinesis.
In the Kinesis Data Streams panel, choose ExampleInputStream.
On the ExampleInputStream page, choose Delete Kinesis Stream and then confirm the deletion.
Open the Amazon S3 console at https://console.aws.amazon.com/s3/.
Choose the ka-app-code-<username>
bucket.
Choose Delete and then enter the bucket name to confirm deletion.
Open the IAM console at https://console.aws.amazon.com/iam/.
On the navigation bar, choose Policies.
In the filter control, enter kinesis.
Choose the kinesis-analytics-service-MyApplication-us-west-2 policy.
Choose Policy Actions and then choose Delete.
On the navigation bar, choose Roles.
Choose the kinesis-analytics-MyApplication-us-west-2 role.
Choose Delete role and then confirm the deletion.
Open the CloudWatch console at https://console.aws.amazon.com/cloudwatch/.
On the navigation bar, choose Logs.
Choose the /aws/kinesis-analytics/MyApplication log group.
Choose Delete Log Group and then confirm the deletion.
The following tutorial demonstrates how to create an Amazon VPC with an Amazon MSK cluster and two topics, and how to create a Managed Service for Apache Flink application that reads from one Amazon MSK topic and writes to another.
Create an Amazon VPC with an Amazon MSK cluster
To create a sample VPC and Amazon MSK cluster to access from a Managed Service for Apache Flink application, follow the Getting Started Using Amazon MSK tutorial.
When completing the tutorial, note the following:
In Step 3: Create a Topic, repeat the kafka-topics.sh --create
command to create a destination topic named AWSKafkaTutorialTopicDestination
:
bin/kafka-topics.sh --create --zookeeper ZooKeeperConnectionString
--replication-factor 3 --partitions 1 --topic AWSKafkaTutorialTopicDestination
Record the bootstrap server list for your cluster. You can get the list of bootstrap servers with the following command (replace ClusterArn
with the ARN of your MSK cluster):
aws kafka get-bootstrap-brokers --region us-west-2 --cluster-arn ClusterArn
{...
"BootstrapBrokerStringTls": "b-2.awskafkatutorialcluste.t79r6y.c4.kafka.us-west-2.amazonaws.com:9094,b-1.awskafkatutorialcluste.t79r6y.c4.kafka.us-west-2.amazonaws.com:9094,b-3.awskafkatutorialcluste.t79r6y.c4.kafka.us-west-2.amazonaws.com:9094"
}
When following the steps in the tutorials, be sure to use your selected AWS Region in your code, commands, and console entries.
In this section, you'll download and compile the application JAR file. We recommend using Java 11.
The Java application code for this example is available from GitHub. To download the application code, do the following:
Install the Git client if you haven't already. For more information, see Installing Git.
Clone the remote repository with the following command:
git clone https://github.com/aws-samples/amazon-kinesis-data-analytics-examples.git
The application code is located in the amazon-kinesis-data-analytics-java-examples/KafkaConnectors/KafkaGettingStartedJob.java
file. You can examine the code to familiarize yourself with the structure of Managed Service for Apache Flink application code.
Use either the command-line Maven tool or your preferred development environment to create the JAR file. To compile the JAR file using the command-line Maven tool, enter the following:
mvn package -Dflink.version=1.15.3
If the build is successful, the following file is created:
target/KafkaGettingStartedJob-1.0.jar
Note
The provided source code relies on libraries from Java 11. If you are using a development environment, make sure that your project's runtime is set to Java 11.
In this section, you upload your application code to the Amazon S3 bucket you created in the Tutorial: Get started using the DataStream API in Managed Service for Apache Flink tutorial.
In the Amazon S3 console, choose the ka-app-code-<username>
bucket, and choose Upload.
In the Select files step, choose Add files. Navigate to the KafkaGettingStartedJob-1.0.jar
file that you created in the previous step.
You don't need to change any of the settings for the object, so choose Upload.
Your application code is now stored in an Amazon S3 bucket where your application can access it.
Create the application
Open the Managed Service for Apache Flink console at https://console.aws.amazon.com/flink.
On the Managed Service for Apache Flink dashboard, choose Create analytics application.
On the Managed Service for Apache Flink - Create application page, provide the application details as follows:
For Application name, enter MyApplication
.
For Runtime, choose Apache Flink version 1.15.2.
For Access permissions, choose Create / update IAM role kinesis-analytics-MyApplication-us-west-2
.
Choose Create application.
When you create a Managed Service for Apache Flink application using the console, you have the option of having an IAM role and policy created for your application. Your application uses this role and policy to access its dependent resources. These IAM resources are named using your application name and Region as follows:
Policy: kinesis-analytics-service-MyApplication-us-west-2
Role: kinesisanalytics-MyApplication-us-west-2
On the MyApplication page, choose Configure.
On the Configure application page, provide the Code location:
For Amazon S3 bucket, enter ka-app-code-<username>.
For Path to Amazon S3 object, enter KafkaGettingStartedJob-1.0.jar
.
Under Access to application resources, for Access permissions, choose Create / update IAM role kinesis-analytics-MyApplication-us-west-2
.
When you specify application resources using the console (such as CloudWatch Logs or an Amazon VPC), the console modifies your application execution role to grant permission to access those resources.
Under Properties, choose Add Group. Enter the following properties:
Group ID     Key                       Value
KafkaSource  topic                     AWSKafkaTutorialTopic
KafkaSource  bootstrap.servers         The bootstrap server list you saved previously
KafkaSource  security.protocol         SSL
KafkaSource  ssl.truststore.location   /usr/lib/jvm/java-11-amazon-corretto/lib/security/cacerts
KafkaSource  ssl.truststore.password   changeit
Note
The ssl.truststore.password for the default certificate is "changeit"; you do not need to change this value if you are using the default certificate.
Choose Add Group again. Enter the following properties:
Group ID   Key                       Value
KafkaSink  topic                     AWSKafkaTutorialTopicDestination
KafkaSink  bootstrap.servers         The bootstrap server list you saved previously
KafkaSink  security.protocol         SSL
KafkaSink  ssl.truststore.location   /usr/lib/jvm/java-11-amazon-corretto/lib/security/cacerts
KafkaSink  ssl.truststore.password   changeit
KafkaSink  transaction.timeout.ms    1000
The application code reads the above application properties to configure the source and sink used to interact with your VPC and Amazon MSK cluster; a sketch of how these properties are read at runtime appears after these configuration steps. For more information about using properties, see Use runtime properties.
Under Snapshots, choose Disable. This will make it easier to update the application without loading invalid application state data.
Under Monitoring, ensure that the Monitoring metrics level is set to Application.
For CloudWatch logging, choose the Enable check box.
In the Virtual Private Cloud (VPC) section, choose the VPC to associate with your application. Choose the subnets and security group associated with your VPC that you want the application to use to access VPC resources.
Choose Update.
When you choose to enable CloudWatch logging, Managed Service for Apache Flink creates a log group and log stream for you. The names of these resources are as follows:
Log group: /aws/kinesis-analytics/MyApplication
Log stream: kinesis-analytics-log-stream
This log stream is used to monitor the application.
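As an illustration, the following sketch shows how an application can read the property groups you configured earlier; it is not the exact code from KafkaGettingStartedJob.java:

import java.io.IOException;
import java.util.Map;
import java.util.Properties;
import com.amazonaws.services.kinesisanalytics.runtime.KinesisAnalyticsRuntime;

// Typically called from main(), which declares "throws IOException".
Map<String, Properties> applicationProperties = KinesisAnalyticsRuntime.getApplicationProperties();
Properties sourceProperties = applicationProperties.get("KafkaSource"); // group ID configured in the console
String sourceTopic = sourceProperties.getProperty("topic");
String bootstrapServers = sourceProperties.getProperty("bootstrap.servers");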
Run the application
To view the Flink job graph, run the application, open the Apache Flink dashboard, and choose the desired Flink job.
Test the application
In this section, you write records to the source topic. The application reads records from the source topic and writes them to the destination topic. You verify the application is working by writing records to the source topic and reading records from the destination topic.
To write and read records from the topics, follow the steps in Step 6: Produce and Consume Data in the Getting Started Using Amazon MSK tutorial.
To read from the destination topic, use the destination topic name instead of the source topic in your second connection to the cluster:
bin/kafka-console-consumer.sh --bootstrap-server BootstrapBrokerString
--consumer.config client.properties --topic AWSKafkaTutorialTopicDestination --from-beginning
If no records appear in the destination topic, see the Cannot access resources in a VPC section in the Troubleshoot Managed Service for Apache Flink topic.
Example: Use an EFO consumer with a Kinesis data stream
In this exercise, you create a Managed Service for Apache Flink application that reads from a Kinesis data stream using an Enhanced Fan-Out (EFO) consumer. If a Kinesis consumer uses EFO, the Kinesis Data Streams service gives it its own dedicated bandwidth, rather than having the consumer share the fixed bandwidth of the stream with the other consumers reading from the stream.
For more information about using EFO with the Kinesis consumer, see FLIP-128: Enhanced Fan Out for Kinesis Consumers.
The application you create in this example uses AWS Kinesis connector (flink-connector-kinesis) 1.15.3.
Create dependent resources
Before you create a Managed Service for Apache Flink application for this exercise, you create the following dependent resources:
Two Kinesis data streams (ExampleInputStream
and ExampleOutputStream
)
An Amazon S3 bucket to store the application's code (ka-app-code-<username>)
You can create the Kinesis streams and Amazon S3 bucket using the console. For instructions for creating these resources, see the following topics:
Creating and Updating Data Streams in the Amazon Kinesis Data Streams Developer Guide. Name your data stream ExampleInputStream
and ExampleOutputStream
.
How Do I Create an S3 Bucket? in the Amazon Simple Storage Service User Guide. Give the Amazon S3 bucket a globally unique name by appending your login name, such as ka-app-code-<username>.
In this section, you use a Python script to write sample records to the stream for the application to process.
Create a file named stock.py
with the following contents:
import datetime
import json
import random
import boto3
STREAM_NAME = "ExampleInputStream"
def get_data():
return {
'event_time': datetime.datetime.now().isoformat(),
'ticker': random.choice(['AAPL', 'AMZN', 'MSFT', 'INTC', 'TBV']),
'price': round(random.random() * 100, 2)}
def generate(stream_name, kinesis_client):
while True:
data = get_data()
print(data)
kinesis_client.put_record(
StreamName=stream_name,
Data=json.dumps(data),
PartitionKey="partitionkey")
if __name__ == '__main__':
generate(STREAM_NAME, boto3.client('kinesis', region_name='us-west-2'))
Run the stock.py
script:
$ python stock.py
Keep the script running while completing the rest of the tutorial.
The Java application code for this example is available from GitHub. To download the application code, do the following:
Install the Git client if you haven't already. For more information, see Installing Git.
Clone the remote repository with the following command:
git clone https://github.com/aws-samples/amazon-kinesis-data-analytics-examples.git
Navigate to the amazon-kinesis-data-analytics-java-examples/EfoConsumer
directory.
The application code is located in the EfoApplication.java
file. Note the following about the application code:
You enable the EFO consumer by setting the following parameters on the Kinesis consumer:
RECORD_PUBLISHER_TYPE: Set this parameter to EFO for your application to use an EFO consumer to access the Kinesis Data Stream data.
EFO_CONSUMER_NAME: Set this parameter to a string value that is unique among the consumers of this stream. Re-using a consumer name in the same Kinesis Data Stream will cause the previous consumer using that name to be terminated.
The following code example demonstrates how to assign values to the consumer configuration properties to use an EFO consumer to read from the source stream:
consumerConfig.putIfAbsent(RECORD_PUBLISHER_TYPE, "EFO");
consumerConfig.putIfAbsent(EFO_CONSUMER_NAME, "basic-efo-flink-app");
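For context, the following is a minimal sketch of how these properties might be applied when constructing the Kinesis consumer. The class and constant names come from the Flink Kinesis connector (flink-connector-kinesis 1.15.x); the stream name and Region are the values used in this example and may differ in the sample code:
import java.util.Properties;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.connectors.kinesis.FlinkKinesisConsumer;
import org.apache.flink.streaming.connectors.kinesis.config.ConsumerConfigConstants;

// Configure the consumer to use EFO (dedicated throughput) instead of shared polling reads.
Properties consumerConfig = new Properties();
consumerConfig.setProperty(ConsumerConfigConstants.AWS_REGION, "us-west-2");
consumerConfig.setProperty(ConsumerConfigConstants.STREAM_INITIAL_POSITION, "LATEST");
consumerConfig.putIfAbsent(ConsumerConfigConstants.RECORD_PUBLISHER_TYPE, "EFO");
consumerConfig.putIfAbsent(ConsumerConfigConstants.EFO_CONSUMER_NAME, "basic-efo-flink-app");

// Create the source; env is the application's StreamExecutionEnvironment.
DataStream<String> input = env.addSource(
        new FlinkKinesisConsumer<>("ExampleInputStream", new SimpleStringSchema(), consumerConfig));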
To compile the application, do the following:
Compiling the application creates the application JAR file (target/aws-kinesis-analytics-java-apps-1.0.jar
).
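The other Flink 1.15 examples in this guide are compiled by running mvn package -Dflink.version=1.15.3 from the directory that contains the project's pom.xml file; assuming this project follows the same layout, a similar command applies here.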
In this section, you upload your application code to the Amazon S3 bucket you created in the Create dependent resources section.
In the Amazon S3 console, choose the ka-app-code-<username>
bucket, and choose Upload.
In the Select files step, choose Add files. Navigate to the aws-kinesis-analytics-java-apps-1.0.jar
file that you created in the previous step.
You don't need to change any of the settings for the object, so choose Upload.
Your application code is now stored in an Amazon S3 bucket where your application can access it.
Create and run the Managed Service for Apache Flink applicationFollow these steps to create, configure, update, and run the application using the console.
Create the applicationOpen the Managed Service for Apache Flink console at https://console.aws.amazon.com/flink
On the Managed Service for Apache Flink dashboard, choose Create analytics application.
On the Managed Service for Apache Flink - Create application page, provide the application details as follows:
For Application name, enter MyApplication
.
For Runtime, choose Apache Flink.
NoteManaged Service for Apache Flink uses Apache Flink version 1.15.2.
Leave the version pulldown as Apache Flink version 1.15.2 (Recommended version).
For Access permissions, choose Create / update IAM role kinesis-analytics-MyApplication-us-west-2
.
Choose Create application.
When you create a Managed Service for Apache Flink application using the console, you have the option of having an IAM role and policy created for your application. Your application uses this role and policy to access its dependent resources. These IAM resources are named using your application name and Region as follows:
Policy: kinesis-analytics-service-
MyApplication
-us-west-2
Role: kinesisanalytics-
MyApplication
-us-west-2
Edit the IAM policy to add permissions to access the Kinesis data streams.
Open the IAM console at https://console.aws.amazon.com/iam/.
Choose Policies. Choose the kinesis-analytics-service-MyApplication-us-west-2
policy that the console created for you in the previous section.
On the Summary page, choose Edit policy. Choose the JSON tab.
Add the highlighted section of the following policy example to the policy. Replace the sample account IDs (012345678901
) with your account ID.
These permissions grant the application the ability to access the EFO consumer.
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "ReadCode",
"Effect": "Allow",
"Action": [
"s3:GetObject",
"logs:DescribeLogGroups",
"s3:GetObjectVersion"
],
"Resource": [
"arn:aws:logs:us-west-2:012345678901
:log-group:*",
"arn:aws:s3:::ka-app-code-<username>
/aws-kinesis-analytics-java-apps-1.0.jar"
]
},
{
"Sid": "DescribeLogStreams",
"Effect": "Allow",
"Action": "logs:DescribeLogStreams",
"Resource": "arn:aws:logs:us-west-2:012345678901
:log-group:/aws/kinesis-analytics/MyApplication:log-stream:*"
},
{
"Sid": "PutLogEvents",
"Effect": "Allow",
"Action": "logs:PutLogEvents",
"Resource": "arn:aws:logs:us-west-2:012345678901
:log-group:/aws/kinesis-analytics/MyApplication:log-stream:kinesis-analytics-log-stream"
},
{
"Sid": "ListCloudwatchLogGroups",
"Effect": "Allow",
"Action": [
"logs:DescribeLogGroups"
],
"Resource": [
"arn:aws:logs:us-west-2:012345678901
:log-group:*"
]
},
{
"Sid": "AllStreams",
"Effect": "Allow",
"Action": [
"kinesis:ListShards",
"kinesis:ListStreamConsumers",
"kinesis:DescribeStreamSummary"
],
"Resource": "arn:aws:kinesis:us-west-2:012345678901
:stream/*"
},
{
"Sid": "Stream",
"Effect": "Allow",
"Action": [
"kinesis:DescribeStream",
"kinesis:RegisterStreamConsumer",
"kinesis:DeregisterStreamConsumer"
],
"Resource": "arn:aws:kinesis:us-west-2:012345678901
:stream/ExampleInputStream"
},
{
"Sid": "WriteOutputStream",
"Effect": "Allow",
"Action": "kinesis:*",
"Resource": "arn:aws:kinesis:us-west-2:012345678901
:stream/ExampleOutputStream"
},
{
"Sid": "Consumer",
"Effect": "Allow",
"Action": [
"kinesis:DescribeStreamConsumer",
"kinesis:SubscribeToShard"
],
"Resource": [
"arn:aws:kinesis:us-west-2:012345678901
:stream/ExampleInputStream/consumer/my-efo-flink-app",
"arn:aws:kinesis:us-west-2:012345678901
:stream/ExampleInputStream/consumer/my-efo-flink-app:*"
]
}
]
}
On the MyApplication page, choose Configure.
On the Configure application page, provide the Code location:
For Amazon S3 bucket, enter ka-app-code-<username>.
For Path to Amazon S3 object, enter aws-kinesis-analytics-java-apps-1.0.jar
.
Under Access to application resources, for Access permissions, choose Create / update IAM role kinesis-analytics-MyApplication-us-west-2
.
Under Properties, choose Create Group.
Enter the following application properties and values:
Group ID | Key | Value
ConsumerConfigProperties | flink.stream.recordpublisher | EFO
ConsumerConfigProperties | flink.stream.efo.consumername | basic-efo-flink-app
ConsumerConfigProperties | INPUT_STREAM | ExampleInputStream
ConsumerConfigProperties | flink.inputstream.initpos | LATEST
ConsumerConfigProperties | AWS_REGION | us-west-2
Under Properties, choose Create Group.
Enter the following application properties and values:
Group ID | Key | Value
ProducerConfigProperties | OUTPUT_STREAM | ExampleOutputStream
ProducerConfigProperties | AWS_REGION | us-west-2
ProducerConfigProperties | AggregationEnabled | false
Under Monitoring, ensure that the Monitoring metrics level is set to Application.
For CloudWatch logging, select the Enable check box.
Choose Update.
When you choose to enable CloudWatch logging, Managed Service for Apache Flink creates a log group and log stream for you. The names of these resources are as follows:
Log group: /aws/kinesis-analytics/MyApplication
Log stream: kinesis-analytics-log-stream
This log stream is used to monitor the application. This is not the same log stream that the application uses to send results.
Run the applicationThe Flink job graph can be viewed by running the application, opening the Apache Flink dashboard, and choosing the desired Flink job.
You can check the Managed Service for Apache Flink metrics on the CloudWatch console to verify that the application is working.
You can also check the Kinesis Data Streams console, in the data stream's Enhanced fan-out tab, for the name of your consumer (basic-efo-flink-app).
Clean up AWS resourcesThis section includes procedures for cleaning up the AWS resources created in the EFO consumer tutorial.
Delete your Managed Service for Apache Flink applicationOpen the Managed Service for Apache Flink console at https://console.aws.amazon.com/flink
In the Managed Service for Apache Flink panel, choose MyApplication.
In the application's page, choose Delete and then confirm the deletion.
Open the Kinesis console at https://console.aws.amazon.com/kinesis.
In the Kinesis Data Streams panel, choose ExampleInputStream.
In the ExampleInputStream page, choose Delete Kinesis Stream and then confirm the deletion.
In the Kinesis streams page, choose the ExampleOutputStream, choose Actions, choose Delete, and then confirm the deletion.
Open the Amazon S3 console at https://console.aws.amazon.com/s3/.
Choose the ka-app-code-<username>
bucket.
Choose Delete and then enter the bucket name to confirm deletion.
Open the IAM console at https://console.aws.amazon.com/iam/.
In the navigation bar, choose Policies.
In the filter control, enter kinesis.
Choose the kinesis-analytics-service-MyApplication-us-west-2 policy.
Choose Policy Actions and then choose Delete.
In the navigation bar, choose Roles.
Choose the kinesis-analytics-MyApplication-us-west-2 role.
Choose Delete role and then confirm the deletion.
Open the CloudWatch console at https://console.aws.amazon.com/cloudwatch/.
In the navigation bar, choose Logs.
Choose the /aws/kinesis-analytics/MyApplication log group.
Choose Delete Log Group and then confirm the deletion.
In this exercise, you create a Managed Service for Apache Flink application that has a Kinesis data stream as a source and a Firehose stream as a sink. Using the sink, you can verify the output of the application in an Amazon S3 bucket.
Create dependent resourcesBefore you create a Managed Service for Apache Flink application for this exercise, you create the following dependent resources:
A Kinesis data stream (ExampleInputStream
)
A Firehose stream that the application writes output to (ExampleDeliveryStream
).
An Amazon S3 bucket to store the application's code (ka-app-code-<username>)
You can create the Kinesis stream, Amazon S3 buckets, and Firehose stream using the console. For instructions for creating these resources, see the following topics:
Write sample records to the input streamIn this section, you use a Python script to write sample records to the stream for the application to process.
Create a file named stock.py
with the following contents:
import datetime
import json
import random
import boto3
STREAM_NAME = "ExampleInputStream"
def get_data():
return {
'event_time': datetime.datetime.now().isoformat(),
'ticker': random.choice(['AAPL', 'AMZN', 'MSFT', 'INTC', 'TBV']),
'price': round(random.random() * 100, 2)}
def generate(stream_name, kinesis_client):
while True:
data = get_data()
print(data)
kinesis_client.put_record(
StreamName=stream_name,
Data=json.dumps(data),
PartitionKey="partitionkey")
if __name__ == '__main__':
generate(STREAM_NAME, boto3.client('kinesis', region_name='us-west-2'))
Run the stock.py
script:
$ python stock.py
Keep the script running while completing the rest of the tutorial.
The Java application code for this example is available from GitHub. To download the application code, do the following:
Clone the remote repository with the following command:
git clone https://github.com/aws-samples/amazon-kinesis-data-analytics-examples.git
Navigate to the amazon-kinesis-data-analytics-java-examples/FirehoseSink
directory.
The application code is located in the FirehoseSinkStreamingJob.java
file. Note the following about the application code:
The application uses a Kinesis source to read from the source stream. The following snippet creates the Kinesis source:
return env.addSource(new FlinkKinesisConsumer<>(inputStreamName,
new SimpleStringSchema(), inputProperties));
The application uses a Firehose sink to write data to a Firehose stream. The following snippet creates the Firehose sink:
private static KinesisFirehoseSink<String> createFirehoseSinkFromStaticConfig() {
Properties sinkProperties = new Properties();
sinkProperties.setProperty(AWS_REGION, region);
return KinesisFirehoseSink.<String>builder()
.setFirehoseClientProperties(sinkProperties)
.setSerializationSchema(new SimpleStringSchema())
.setDeliveryStreamName(outputDeliveryStreamName)
.build();
}
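As a point of reference, a sink built this way is typically attached to the data stream with sinkTo. The following one-line sketch assumes input is the DataStream<String> returned by the Kinesis source shown above:
input.sinkTo(createFirehoseSinkFromStaticConfig());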
To compile the application, do the following:
Compiling the application creates the application JAR file (target/aws-kinesis-analytics-java-apps-1.0.jar
).
In this section, you upload your application code to the Amazon S3 bucket that you created in the Create dependent resources section.
To upload the application codeOpen the Amazon S3 console at https://console.aws.amazon.com/s3/.
In the console, choose the ka-app-code-<username>
bucket, and then choose Upload.
In the Select files step, choose Add files. Navigate to the java-getting-started-1.0.jar
file that you created in the previous step.
You don't need to change any of the settings for the object, so choose Upload.
Your application code is now stored in an Amazon S3 bucket where your application can access it.
Create and run the Managed Service for Apache Flink applicationYou can create and run a Managed Service for Apache Flink application using either the console or the AWS CLI.
NoteWhen you create the application using the console, your AWS Identity and Access Management (IAM) and Amazon CloudWatch Logs resources are created for you. When you create the application using the AWS CLI, you create these resources separately.
Create and run the application (console)Follow these steps to create, configure, update, and run the application using the console.
Create the applicationOpen the Managed Service for Apache Flink console at https://console.aws.amazon.com/flink
On the Managed Service for Apache Flink dashboard, choose Create analytics application.
On the Managed Service for Apache Flink - Create application page, provide the application details as follows:
For Application name, enter MyApplication
.
For Description, enter My java test app
.
For Runtime, choose Apache Flink.
NoteManaged Service for Apache Flink uses Apache Flink version 1.15.2.
Leave the version pulldown as Apache Flink version 1.15.2 (Recommended version).
For Access permissions, choose Create / update IAM role kinesis-analytics-MyApplication-us-west-2
.
Choose Create application.
When you create the application using the console, you have the option of having an IAM role and policy created for your application. The application uses this role and policy to access its dependent resources. These IAM resources are named using your application name and Region as follows:
Policy: kinesis-analytics-service-
MyApplication
-us-west-2
Role: kinesisanalytics-
MyApplication
-us-west-2
Edit the IAM policy to add permissions to access the Kinesis data stream and Firehose stream.
Open the IAM console at https://console.aws.amazon.com/iam/.
Choose Policies. Choose the kinesis-analytics-service-MyApplication-us-west-2
policy that the console created for you in the previous section.
On the Summary page, choose Edit policy. Choose the JSON tab.
Add the highlighted section of the following policy example to the policy. Replace all the instances of the sample account IDs (012345678901
) with your account ID.
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "ReadCode",
"Effect": "Allow",
"Action": [
"s3:GetObject",
"s3:GetObjectVersion"
],
"Resource": [
"arn:aws:s3:::ka-app-code-username
/java-getting-started-1.0.jar"
]
},
{
"Sid": "DescribeLogGroups",
"Effect": "Allow",
"Action": [
"logs:DescribeLogGroups"
],
"Resource": [
"arn:aws:logs:us-west-2:012345678901
:log-group:*"
]
},
{
"Sid": "DescribeLogStreams",
"Effect": "Allow",
"Action": [
"logs:DescribeLogStreams"
],
"Resource": [
"arn:aws:logs:us-west-2:012345678901
:log-group:/aws/kinesis-analytics/MyApplication:log-stream:*"
]
},
{
"Sid": "PutLogEvents",
"Effect": "Allow",
"Action": [
"logs:PutLogEvents"
],
"Resource": [
"arn:aws:logs:us-west-2:012345678901
:log-group:/aws/kinesis-analytics/MyApplication:log-stream:kinesis-analytics-log-stream"
]
},
{
"Sid": "ReadInputStream",
"Effect": "Allow",
"Action": "kinesis:*",
"Resource": "arn:aws:kinesis:us-west-2:012345678901
:stream/ExampleInputStream"
},
{
"Sid": "WriteDeliveryStream",
"Effect": "Allow",
"Action": "firehose:*",
"Resource": "arn:aws:firehose:us-west-2:012345678901
:deliverystream/ExampleDeliveryStream"
}
]
}
On the MyApplication page, choose Configure.
On the Configure application page, provide the Code location:
For Amazon S3 bucket, enter ka-app-code-<username>.
For Path to Amazon S3 object, enter java-getting-started-1.0.jar
.
Under Access to application resources, for Access permissions, choose Create / update IAM role kinesis-analytics-MyApplication-us-west-2
.
Under Monitoring, ensure that the Monitoring metrics level is set to Application.
For CloudWatch logging, select the Enable check box.
Choose Update.
When you choose to enable CloudWatch logging, Managed Service for Apache Flink creates a log group and log stream for you. The names of these resources are as follows:
Log group: /aws/kinesis-analytics/MyApplication
Log stream: kinesis-analytics-log-stream
The Flink job graph can be viewed by running the application, opening the Apache Flink dashboard, and choosing the desired Flink job.
Stop the applicationOn the MyApplication page, choose Stop. Confirm the action.
Update the applicationUsing the console, you can update application settings such as application properties, monitoring settings, and the location or file name of the application JAR.
On the MyApplication page, choose Configure. Update the application settings and choose Update.
NoteTo update the application's code on the console, you must either change the object name of the JAR, use a different S3 bucket, or use the AWS CLI as described in the Update the application code section. If the file name or the bucket does not change, the application code is not reloaded when you choose Update on the Configure page.
Create and run the application (AWS CLI)In this section, you use the AWS CLI to create and run the Managed Service for Apache Flink application.
Create a permissions policyFirst, you create a permissions policy with two statements: one that grants permissions for the read
action on the source stream, and another that grants permissions for write
actions on the sink stream. You then attach the policy to an IAM role (which you create in the next section). Thus, when Managed Service for Apache Flink assumes the role, the service has the necessary permissions to read from the source stream and write to the sink stream.
Use the following code to create the AKReadSourceStreamWriteSinkStream
permissions policy. Replace username
with the user name that you will use to create the Amazon S3 bucket to store the application code. Replace the account ID in the Amazon Resource Names (ARNs) (012345678901) with your account ID.
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "S3",
"Effect": "Allow",
"Action": [
"s3:GetObject",
"s3:GetObjectVersion"
],
"Resource": ["arn:aws:s3:::ka-app-code-username
",
"arn:aws:s3:::ka-app-code-username
/*"
]
},
{
"Sid": "ReadInputStream",
"Effect": "Allow",
"Action": "kinesis:*",
"Resource": "arn:aws:kinesis:us-west-2:012345678901
:stream/ExampleInputStream"
},
{
"Sid": "WriteDeliveryStream",
"Effect": "Allow",
"Action": "firehose:*",
"Resource": "arn:aws:firehose:us-west-2:012345678901
:deliverystream/ExampleDeliveryStream"
}
]
}
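If you prefer to create the policy from the AWS CLI rather than the console, one option (assuming you saved the preceding JSON to a file named AKReadSourceStreamWriteSinkStream.json) is the following command:
aws iam create-policy --policy-name AKReadSourceStreamWriteSinkStream --policy-document file://AKReadSourceStreamWriteSinkStream.json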
For step-by-step instructions to create a permissions policy, see Tutorial: Create and Attach Your First Customer Managed Policy in the IAM User Guide.
NoteTo access other Amazon services, you can use the AWS SDK for Java. Managed Service for Apache Flink automatically sets the credentials required by the SDK to those of the service execution IAM role that is associated with your application. No additional steps are needed.
Create an IAM roleIn this section, you create an IAM role that the Managed Service for Apache Flink application can assume to read a source stream and write to the sink stream.
Managed Service for Apache Flink cannot access your stream if it doesn't have permissions. You grant these permissions via an IAM role. Each IAM role has two policies attached. The trust policy grants Managed Service for Apache Flink permission to assume the role. The permissions policy determines what Managed Service for Apache Flink can do after assuming the role.
You attach the permissions policy that you created in the preceding section to this role.
To create an IAM roleOpen the IAM console at https://console.aws.amazon.com/iam/.
In the navigation pane, choose Roles, Create Role.
Under Select type of trusted identity, choose AWS Service. Under Choose the service that will use this role, choose Kinesis. Under Select your use case, choose Kinesis Analytics.
Choose Next: Permissions.
On the Attach permissions policies page, choose Next: Review. You attach permissions policies after you create the role.
On the Create role page, enter MF-stream-rw-role
for the Role name. Choose Create role.
Now you have created a new IAM role called MF-stream-rw-role
. Next, you update the trust and permissions policies for the role.
Attach the permissions policy to the role.
NoteFor this exercise, Managed Service for Apache Flink assumes this role for both reading data from a Kinesis data stream (source) and writing output to another Kinesis data stream. So you attach the policy that you created in the previous step, Create a permissions policy.
On the Summary page, choose the Permissions tab.
Choose Attach Policies.
In the search box, enter AKReadSourceStreamWriteSinkStream
(the policy that you created in the previous section).
Choose the AKReadSourceStreamWriteSinkStream policy, and choose Attach policy.
You now have created the service execution role that your application will use to access resources. Make a note of the ARN of the new role.
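If you prefer the AWS CLI, you can retrieve the role ARN with a command such as the following:
aws iam get-role --role-name MF-stream-rw-role --query 'Role.Arn' --output text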
For step-by-step instructions for creating a role, see Creating an IAM Role (Console) in the IAM User Guide.
Create the Managed Service for Apache Flink applicationSave the following JSON code to a file named create_request.json
. Replace the sample role ARN with the ARN for the role that you created previously. Replace the bucket ARN suffix with the suffix that you chose in the Create dependent resources section (ka-app-code-<username>). Replace the sample account ID (012345678901) in the service execution role with your account ID.
{
    "ApplicationName": "test",
    "ApplicationDescription": "my java test app",
    "RuntimeEnvironment": "FLINK-1_15",
    "ServiceExecutionRole": "arn:aws:iam::012345678901:role/MF-stream-rw-role",
    "ApplicationConfiguration": {
        "ApplicationCodeConfiguration": {
            "CodeContent": {
                "S3ContentLocation": {
                    "BucketARN": "arn:aws:s3:::ka-app-code-username",
                    "FileKey": "java-getting-started-1.0.jar"
                }
            },
            "CodeContentType": "ZIPFILE"
        }
    }
}
Execute the CreateApplication
action with the preceding request to create the application:
aws kinesisanalyticsv2 create-application --cli-input-json file://create_request.json
The application is now created. You start the application in the next step.
Start the applicationIn this section, you use the StartApplication
action to start the application.
Save the following JSON code to a file named start_request.json
.
{
"ApplicationName": "test",
"RunConfiguration": {
"ApplicationRestoreConfiguration": {
"ApplicationRestoreType": "RESTORE_FROM_LATEST_SNAPSHOT"
}
}
}
Execute the StartApplication
action with the preceding request to start the application:
aws kinesisanalyticsv2 start-application --cli-input-json file://start_request.json
The application is now running. You can check the Managed Service for Apache Flink metrics on the Amazon CloudWatch console to verify that the application is working.
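You can also confirm the application state from the AWS CLI. For example, the following command returns the application details, including an ApplicationStatus of RUNNING once the job has started:
aws kinesisanalyticsv2 describe-application --application-name test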
Stop the applicationIn this section, you use the StopApplication
action to stop the application.
Save the following JSON code to a file named stop_request.json
.
{
"ApplicationName": "test"
}
Execute the StopApplication
action with the following request to stop the application:
aws kinesisanalyticsv2 stop-application --cli-input-json file://stop_request.json
The application is now stopped.
Add a CloudWatch logging optionYou can use the AWS CLI to add an Amazon CloudWatch log stream to your application. For information about using CloudWatch Logs with your application, see Set up application logging in Managed Service for Apache Flink.
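For example, assuming a log stream that already exists and an application at version 1, a call to add the logging option might look like the following (the log group and log stream names here are the ones used elsewhere in this guide):
aws kinesisanalyticsv2 add-application-cloud-watch-logging-option --application-name test --current-application-version-id 1 --cloud-watch-logging-option LogStreamARN=arn:aws:logs:us-west-2:012345678901:log-group:/aws/kinesis-analytics/MyApplication:log-stream:kinesis-analytics-log-stream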
Update the application codeWhen you need to update your application code with a new version of your code package, you use the UpdateApplication
AWS CLI action.
To use the AWS CLI, delete your previous code package from your Amazon S3 bucket, upload the new version, and call UpdateApplication
, specifying the same Amazon S3 bucket and object name.
The following sample request for the UpdateApplication
action reloads the application code and restarts the application. Update the CurrentApplicationVersionId
to the current application version. You can check the current application version using the ListApplications
or DescribeApplication
actions. Update the bucket name suffix (<username>) with the suffix you chose in the Create dependent resources section.
{
"ApplicationName": "test",
"CurrentApplicationVersionId": 1
,
"ApplicationConfigurationUpdate": {
"ApplicationCodeConfigurationUpdate": {
"CodeContentUpdate": {
"S3ContentLocationUpdate": {
"BucketARNUpdate": "arn:aws:s3:::ka-app-code-username
",
"FileKeyUpdate": "java-getting-started-1.0.jar"
}
}
}
}
}
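Assuming you save this request to a file named update_request.json, execute the UpdateApplication action with the following command:
aws kinesisanalyticsv2 update-application --cli-input-json file://update_request.json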
Clean up AWS resources
This section includes procedures for cleaning up AWS resources created in the Getting Started tutorial.
Delete your Managed Service for Apache Flink applicationOpen the Managed Service for Apache Flink console at https://console.aws.amazon.com/flink
In the Managed Service for Apache Flink panel, choose MyApplication.
Choose Configure.
In the Snapshots section, choose Disable and then choose Update.
In the application's page, choose Delete and then confirm the deletion.
Open the Kinesis console at https://console.aws.amazon.com/kinesis.
In the Kinesis Data Streams panel, choose ExampleInputStream.
In the ExampleInputStream page, choose Delete Kinesis Stream and then confirm the deletion.
Open the Kinesis console at https://console.aws.amazon.com/kinesis.
In the Firehose panel, choose ExampleDeliveryStream.
In the ExampleDeliveryStream page, choose Delete Firehose stream and then confirm the deletion.
Open the Amazon S3 console at https://console.aws.amazon.com/s3/.
Choose the ka-app-code-<username>
bucket.
Choose Delete and then enter the bucket name to confirm deletion.
If you created an Amazon S3 bucket for your Firehose stream's destination, delete that bucket too.
Open the IAM console at https://console.aws.amazon.com/iam/.
In the navigation bar, choose Policies.
In the filter control, enter kinesis.
Choose the kinesis-analytics-service-MyApplication-us-west-2 policy.
Choose Policy Actions and then choose Delete.
If you created a new policy for your Firehose stream, delete that policy too.
In the navigation bar, choose Roles.
Choose the kinesis-analytics-MyApplication-us-west-2 role.
Choose Delete role and then confirm the deletion.
If you created a new role for your Firehose stream, delete that role too.
Open the CloudWatch console at https://console.aws.amazon.com/cloudwatch/.
In the navigation bar, choose Logs.
Choose the /aws/kinesis-analytics/MyApplication log group.
Choose Delete Log Group and then confirm the deletion.
This example demonstrates how to create a Managed Service for Apache Flink application that reads data from a Kinesis stream in a different account. In this example, you will use one account for the source Kinesis stream, and a second account for the Managed Service for Apache Flink application and sink Kinesis stream.
PrerequisitesIn this tutorial, you modify the Getting Started example to read data from a Kinesis stream in a different account. Complete the Tutorial: Get started using the DataStream API in Managed Service for Apache Flink tutorial before proceeding.
You need two AWS accounts to complete this tutorial: one for the source stream, and one for the application and the sink stream. Use the AWS account you used for the Getting Started tutorial for the application and sink stream. Use a different AWS account for the source stream.
You will access your two AWS accounts by using named profiles. Modify your AWS credentials and configuration files to include two profiles that contain the region and connection information for your two accounts.
The following example credential file contains two named profiles, ka-source-stream-account-profile
and ka-sink-stream-account-profile
. Use the account you used for the Getting Started tutorial for the sink stream account.
[ka-source-stream-account-profile]
aws_access_key_id=AKIAIOSFODNN7EXAMPLE
aws_secret_access_key=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
[ka-sink-stream-account-profile]
aws_access_key_id=AKIAI44QH8DHBEXAMPLE
aws_secret_access_key=je7MtGbClwBF/2Zp9Utk/h3yCo8nvbEXAMPLEKEY
The following example configuration file contains the same named profiles with region and output format information.
[profile ka-source-stream-account-profile]
region=us-west-2
output=json
[profile ka-sink-stream-account-profile]
region=us-west-2
output=json
Note
This tutorial does not use the ka-sink-stream-account-profile
. It is included as an example of how to access two different AWS accounts using profiles.
For more information on using named profiles with the AWS CLI, see Named Profiles in the AWS Command Line Interface documentation.
Create source Kinesis streamIn this section, you will create the Kinesis stream in the source account.
Enter the following command to create the Kinesis stream that the application will use for input. Note that the --profile
parameter specifies which account profile to use.
$ aws kinesis create-stream \
--stream-name SourceAccountExampleInputStream \
--shard-count 1 \
--profile ka-source-stream-account-profile
Create and update IAM roles and policies
To allow object access across AWS accounts, you create an IAM role and policy in the source account. Then, you modify the IAM policy in the sink account. For information about creating IAM roles and policies, see the following topics in the AWS Identity and Access Management User Guide:
Sink account roles and policiesEdit the kinesis-analytics-service-MyApplication-us-west-2
policy from the Getting Started tutorial. This policy allows the role in the source account to be assumed in order to read the source stream.
When you use the console to create your application, the console creates a policy called kinesis-analytics-service-<application name>-<application region>, and a role called kinesisanalytics-<application name>-<application region>.
Add the highlighted section below to the policy. Replace the sample account ID (SOURCE01234567
) with the ID of the account you will use for the source stream.
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "AssumeRoleInSourceAccount",
"Effect": "Allow",
"Action": "sts:AssumeRole",
"Resource": "arn:aws:iam::123456789012
:role/KA-Source-Stream-Role
"
},
{
"Sid": "ReadCode",
"Effect": "Allow",
"Action": [
"s3:GetObject",
"s3:GetObjectVersion"
],
"Resource": [
"arn:aws:s3:::ka-app-code-username
/aws-kinesis-analytics-java-apps-1.0.jar"
]
},
{
"Sid": "ListCloudwatchLogGroups",
"Effect": "Allow",
"Action": [
"logs:DescribeLogGroups"
],
"Resource": [
"arn:aws:logs:us-west-2:123456789012
:log-group:*"
]
},
{
"Sid": "ListCloudwatchLogStreams",
"Effect": "Allow",
"Action": [
"logs:DescribeLogStreams"
],
"Resource": [
"arn:aws:logs:us-west-2:123456789012
:log-group:/aws/kinesis-analytics/MyApplication:log-stream:*"
]
},
{
"Sid": "PutCloudwatchLogs",
"Effect": "Allow",
"Action": [
"logs:PutLogEvents"
],
"Resource": [
"arn:aws:logs:us-west-2:123456789012
:log-group:/aws/kinesis-analytics/MyApplication:log-stream:kinesis-analytics-log-stream"
]
}
]
}
Open the kinesis-analytics-MyApplication-us-west-2
role, and make a note of its Amazon Resource Name (ARN). You will need it in the next section. The role ARN looks like the following.
arn:aws:iam::SINK012345678
:role/service-role/kinesis-analytics-MyApplication-us-west-2
Create a policy in the source account called KA-Source-Stream-Policy
. Use the following JSON for the policy. Replace the sample account number with the account number of the source account.
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "ReadInputStream",
"Effect": "Allow",
"Action": [
"kinesis:DescribeStream",
"kinesis:GetRecords",
"kinesis:GetShardIterator",
"kinesis:ListShards"
],
"Resource": "arn:aws:kinesis:us-west-2:111122223333
:stream/SourceAccountExampleInputStream"
}
]
}
Create a role in the source account called KA-Source-Stream-Role. Do the following to create the role using the Managed Service for Apache Flink use case:
In the IAM Management Console, choose Create Role.
On the Create Role page, choose AWS Service. In the service list, choose Kinesis.
In the Select your use case section, choose Managed Service for Apache Flink.
Choose Next: Permissions.
Add the KA-Source-Stream-Policy
permissions policy you created in the previous step. Choose Next: Tags.
Choose Next: Review.
Name the role KA-Source-Stream-Role
. Your application will use this role to access the source stream.
Add the kinesis-analytics-MyApplication-us-west-2
ARN from the sink account to the trust relationship of the KA-Source-Stream-Role
role in the source account:
Open the KA-Source-Stream-Role
in the IAM console.
Choose the Trust Relationships tab.
Choose Edit trust relationship.
Use the following code for the trust relationship. Replace the sample account ID (SINK012345678) with your sink account ID.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::111122223333
:role/service-role/kinesis-analytics-MyApplication-us-west-2"
},
"Action": "sts:AssumeRole"
}
]
}
In this section, you update the Python script that generates sample data to use the source account profile.
Update the stock.py
script with the following highlighted changes.
import json
import boto3
import random
import datetime
import os
os.environ['AWS_PROFILE'] ='ka-source-stream-account-profile'
os.environ['AWS_DEFAULT_REGION'] = 'us-west-2'
kinesis = boto3.client('kinesis')
def getReferrer():
data = {}
now = datetime.datetime.now()
str_now = now.isoformat()
data['event_time'] = str_now
data['ticker'] = random.choice(['AAPL', 'AMZN', 'MSFT', 'INTC', 'TBV'])
price = random.random() * 100
data['price'] = round(price, 2)
return data
while True:
data = json.dumps(getReferrer())
print(data)
kinesis.put_record(
StreamName="SourceAccountExampleInputStream
",
Data=data,
PartitionKey="partitionkey")
Update the Java application
In this section, you update the Java application code to assume the source account role when reading from the source stream.
Make the following changes to the BasicStreamingJob.java
file. Replace the example source account number (SOURCE01234567
) with your source account number.
package com.amazonaws.services.kinesisanalytics;
import com.amazonaws.services.kinesisanalytics.runtime.KinesisAnalyticsRuntime;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kinesis.FlinkKinesisConsumer;
import org.apache.flink.connector.kinesis.sink.KinesisStreamsSink;
import org.apache.flink.streaming.connectors.kinesis.config.ConsumerConfigConstants;
import org.apache.flink.streaming.connectors.kinesis.config.AWSConfigConstants;
import java.io.IOException;
import java.util.Map;
import java.util.Properties;
/**
* A basic Managed Service for Apache Flink for Java application with Kinesis data streams
* as source and sink.
*/
public class BasicStreamingJob {
private static final String region = "us-west-2";
private static final String inputStreamName = "SourceAccountExampleInputStream";
private static final String outputStreamName = "ExampleOutputStream";
private static final String roleArn = "arn:aws:iam::SOURCE01234567:role/KA-Source-Stream-Role";
private static final String roleSessionName = "ksassumedrolesession";
private static DataStream<String> createSourceFromStaticConfig(StreamExecutionEnvironment env) {
Properties inputProperties = new Properties();
inputProperties.setProperty(AWSConfigConstants.AWS_CREDENTIALS_PROVIDER, "ASSUME_ROLE");
inputProperties.setProperty(AWSConfigConstants.AWS_ROLE_ARN, roleArn);
inputProperties.setProperty(AWSConfigConstants.AWS_ROLE_SESSION_NAME, roleSessionName);
inputProperties.setProperty(ConsumerConfigConstants.AWS_REGION, region);
inputProperties.setProperty(ConsumerConfigConstants.STREAM_INITIAL_POSITION, "LATEST");
return env.addSource(new FlinkKinesisConsumer<>(inputStreamName, new SimpleStringSchema(), inputProperties));
}
private static KinesisStreamsSink<String> createSinkFromStaticConfig() {
Properties outputProperties = new Properties();
outputProperties.setProperty(AWSConfigConstants.AWS_REGION, region);
return KinesisStreamsSink.<String>builder()
.setKinesisClientProperties(outputProperties)
.setSerializationSchema(new SimpleStringSchema())
.setStreamName(outputProperties.getProperty("OUTPUT_STREAM", "ExampleOutputStream"))
.setPartitionKeyGenerator(element -> String.valueOf(element.hashCode()))
.build();
}
public static void main(String[] args) throws Exception {
// set up the streaming execution environment
final StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
DataStream<String> input = createSourceFromStaticConfig(env);
input.addSink(createSinkFromStaticConfig());
env.execute("Flink Streaming Java API Skeleton");
}
}
Build, upload, and run the application
Do the following to update and run the application:
Build the application again by running the following command in the directory with the pom.xml
file.
mvn package -Dflink.version=1.15.3
Delete the previous JAR file from your Amazon Simple Storage Service (Amazon S3) bucket, and then upload the new aws-kinesis-analytics-java-apps-1.0.jar
file to the S3 bucket.
In the application's page in the Managed Service for Apache Flink console, choose Configure, Update to reload the application JAR file.
Run the stock.py
script to send data to the source stream.
python stock.py
The application now reads data from the Kinesis stream in the other account.
You can verify that the application is working by checking the PutRecords.Bytes
metric of the ExampleOutputStream
stream. If there is activity in the output stream, the application is functioning properly.
If you are using the current data source APIs, your application can use the Amazon MSK Config Providers utility described here. This allows your KafkaSource to access a keystore and truststore for mutual TLS that are stored in Amazon S3.
...
// define names of config providers:
builder.setProperty("config.providers", "secretsmanager,s3import");
// provide implementation classes for each provider:
builder.setProperty("config.providers.secretsmanager.class", "com.amazonaws.kafka.config.providers.SecretsManagerConfigProvider");
builder.setProperty("config.providers.s3import.class", "com.amazonaws.kafka.config.providers.S3ImportConfigProvider");
String region = appProperties.get(Helpers.S3_BUCKET_REGION_KEY).toString();
String keystoreS3Bucket = appProperties.get(Helpers.KEYSTORE_S3_BUCKET_KEY).toString();
String keystoreS3Path = appProperties.get(Helpers.KEYSTORE_S3_PATH_KEY).toString();
String truststoreS3Bucket = appProperties.get(Helpers.TRUSTSTORE_S3_BUCKET_KEY).toString();
String truststoreS3Path = appProperties.get(Helpers.TRUSTSTORE_S3_PATH_KEY).toString();
String keystorePassSecret = appProperties.get(Helpers.KEYSTORE_PASS_SECRET_KEY).toString();
String keystorePassSecretField = appProperties.get(Helpers.KEYSTORE_PASS_SECRET_FIELD_KEY).toString();
// region, etc..
builder.setProperty("config.providers.s3import.param.region", region);
// properties
builder.setProperty("ssl.truststore.location", "${s3import:" + region + ":" + truststoreS3Bucket + "/" + truststoreS3Path + "}");
builder.setProperty("ssl.keystore.type", "PKCS12");
builder.setProperty("ssl.keystore.location", "${s3import:" + region + ":" + keystoreS3Bucket + "/" + keystoreS3Path + "}");
builder.setProperty("ssl.keystore.password", "${secretsmanager:" + keystorePassSecret + ":" + keystorePassSecretField + "}");
builder.setProperty("ssl.key.password", "${secretsmanager:" + keystorePassSecret + ":" + keystorePassSecretField + "}");
...
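The builder in this snippet is assumed to be a KafkaSourceBuilder from the Flink Kafka connector; a minimal sketch of creating it (the variable names and topic are illustrative) looks like the following:
KafkaSourceBuilder<String> builder = KafkaSource.<String>builder()
        .setBootstrapServers(bootstrapServers)          // broker list for your MSK cluster
        .setTopics("AWSKafkaTutorialTopic")
        .setValueOnlyDeserializer(new SimpleStringSchema());
// The config provider and ssl.* properties shown above are then applied with builder.setProperty(...),
// and the source is created with builder.build().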
More details and a walkthrough can be found here.
Legacy SourceFunction APIsIf you are using the legacy SourceFunction APIs, your application will use custom serialization and deserialization schemas that override the open
method to load the custom truststore. This makes the truststore available to the application after the application restarts or replaces threads.
The custom truststore is retrieved and stored using the following code:
public static void initializeKafkaTruststore() {
ClassLoader classLoader = Thread.currentThread().getContextClassLoader();
URL inputUrl = classLoader.getResource("kafka.client.truststore.jks");
File dest = new File("/tmp/kafka.client.truststore.jks");
try {
FileUtils.copyURLToFile(inputUrl, dest);
} catch (Exception ex) {
throw new FlinkRuntimeException("Failed to initialize Kafka truststore", ex);
}
}
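As an illustration, a deserialization schema can call this helper from its open method so the truststore is staged before any records are read. The following is a minimal sketch; the class names (including the class assumed to hold initializeKafkaTruststore) are placeholders, not the exact names used in the sample:
import org.apache.flink.api.common.serialization.DeserializationSchema;
import org.apache.flink.api.common.serialization.SimpleStringSchema;

public class CustomTruststoreStringSchema extends SimpleStringSchema {
    @Override
    public void open(DeserializationSchema.InitializationContext context) throws Exception {
        // Stage the bundled truststore under /tmp before the first record is deserialized,
        // so it is available again after the application restarts or replaces threads.
        KafkaTruststoreHelper.initializeKafkaTruststore();
    }
}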
Note
Apache Flink requires the truststore to be in JKS format.
The following tutorial demonstrates how to securely connect (encryption in transit) to a Kafka Cluster that uses server certificates issued by a custom, private or even self-hosted Certificate Authority (CA).
For connecting any Kafka Client securely over TLS to a Kafka Cluster, the Kafka Client (like the example Flink application) must trust the complete chain of trust presented by the Kafka Cluster's server certificates (from the Issuing CA up to the Root-Level CA). As an example for a custom truststore, we will use an Amazon MSK cluster with Mutual TLS (MTLS) Authentication enabled. This implies that the MSK cluster nodes use server certificates that are issued by an AWS Certificate Manager Private Certificate Authority (ACM Private CA) that is private to your account and Region and therefore not trusted by the default truststore of the Java Virtual Machine (JVM) executing the Flink application.
Note
A keystore is used to store the private key and identity certificates that an application presents to the server or client for verification.
A truststore is used to store certificates from certificate authorities (CAs) that verify the certificate presented by the server in an SSL connection.
You can also use the technique in this tutorial for interactions between a Managed Service for Apache Flink application and other Apache Kafka sources, such as:
Create a VPC with an Amazon MSK clusterTo create a sample VPC and Amazon MSK cluster to access from a Managed Service for Apache Flink application, follow the Getting Started Using Amazon MSK tutorial.
When completing the tutorial, also do the following:
In Step 3: Create a Topic, repeat the kafka-topics.sh --create
command to create a destination topic named AWSKafkaTutorialTopicDestination
:
bin/kafka-topics.sh --create --zookeeper ZooKeeperConnectionString --replication-factor 3 --partitions 1 --topic AWSKafkaTutorialTopicDestination
Note
If the kafka-topics.sh
command returns a ZooKeeperClientTimeoutException
, verify that the Kafka cluster's security group has an inbound rule to allow all traffic from the client instance's private IP address.
Record the bootstrap server list for your cluster. You can get the list of bootstrap servers with the following command (replace ClusterArn
with the ARN of your MSK cluster):
aws kafka get-bootstrap-brokers --region us-west-2 --cluster-arn ClusterArn
{...
"BootstrapBrokerStringTls": "b-2.awskafkatutorialcluste.t79r6y.c4.kafka.us-west-2.amazonaws.com:9094,b-1.awskafkatutorialcluste.t79r6y.c4.kafka.us-west-2.amazonaws.com:9094,b-3.awskafkatutorialcluste.t79r6y.c4.kafka.us-west-2.amazonaws.com:9094"
}
When following the steps in this tutorial and the prerequisite tutorials, be sure to use your selected AWS Region in your code, commands, and console entries.
In this section, you create a custom certificate authority (CA), use it to generate a custom truststore, and apply it to your MSK cluster.
To create and apply your custom truststore, follow the Client Authentication tutorial in the Amazon Managed Streaming for Apache Kafka Developer Guide.
Create the application codeIn this section, you download and compile the application JAR file.
The Java application code for this example is available from GitHub. To download the application code, do the following:
Install the Git client if you haven't already. For more information, see Installing Git.
Clone the remote repository with the following command:
git clone https://github.com/aws-samples/amazon-kinesis-data-analytics-examples.git
The application code is located in the amazon-kinesis-data-analytics-java-examples/CustomKeystore directory. You can examine the code to familiarize yourself with the structure of Managed Service for Apache Flink code.
Use either the command line Maven tool or your preferred development environment to create the JAR file. To compile the JAR file using the command line Maven tool, enter the following:
mvn package -Dflink.version=1.15.3
If the build is successful, the following file is created:
target/flink-app-1.0-SNAPSHOT.jar
Note
The provided source code relies on libraries from Java 11.
In this section, you upload your application code to the Amazon S3 bucket that you created in the Tutorial: Get started using the DataStream API in Managed Service for Apache Flink tutorial.
In the Amazon S3 console, choose the ka-app-code-<username>
bucket, and choose Upload.
In the Select files step, choose Add files. Navigate to the flink-app-1.0-SNAPSHOT.jar
file that you created in the previous step.
You don't need to change any of the settings for the object, so choose Upload.
Your application code is now stored in an Amazon S3 bucket where your application can access it.
Create the applicationOpen the Managed Service for Apache Flink console at https://console.aws.amazon.com/flink
On the Managed Service for Apache Flink dashboard, choose Create analytics application.
On the Managed Service for Apache Flink - Create application page, provide the application details as follows:
For Application name, enter MyApplication
.
For Runtime, choose Apache Flink version 1.15.2.
For Access permissions, choose Create / update IAM role kinesis-analytics-MyApplication-us-west-2
.
Choose Create application.
When you create a Managed Service for Apache Flink application using the console, you have the option of having an IAM role and policy created for your application. Your application uses this role and policy to access its dependent resources. These IAM resources are named using your application name and Region as follows:
Policy: kinesis-analytics-service-
MyApplication
-us-west-2
Role: kinesisanalytics-
MyApplication
-us-west-2
On the MyApplication page, choose Configure.
On the Configure application page, provide the Code location:
For Amazon S3 bucket, enter ka-app-code-<username>.
For Path to Amazon S3 object, enter flink-app-1.0-SNAPSHOT.jar
.
Under Access to application resources, for Access permissions, choose Create / update IAM role kinesis-analytics-MyApplication-us-west-2
.
When you specify application resources using the console (such as logs or a VPC), the console modifies your application execution role to grant permission to access those resources.
Under Properties, choose Add Group. Enter the following properties:
Group ID | Key | Value
KafkaSource | topic | AWSKafkaTutorialTopic
KafkaSource | bootstrap.servers | The bootstrap server list you saved previously
KafkaSource | security.protocol | SSL
KafkaSource | ssl.truststore.location | /usr/lib/jvm/java-11-amazon-corretto/lib/security/cacerts
KafkaSource | ssl.truststore.password | changeit
Note
The ssl.truststore.password for the default certificate is "changeit"—you don't need to change this value if you're using the default certificate.
Choose Add Group again. Enter the following properties:
Group ID | Key | Value
KafkaSink | topic | AWSKafkaTutorialTopicDestination
KafkaSink | bootstrap.servers | The bootstrap server list you saved previously
KafkaSink | security.protocol | SSL
KafkaSink | ssl.truststore.location | /usr/lib/jvm/java-11-amazon-corretto/lib/security/cacerts
KafkaSink | ssl.truststore.password | changeit
KafkaSink | transaction.timeout.ms | 1000
The application code reads the above application properties to configure the source and sink used to interact with your VPC and Amazon MSK cluster. For more information about using properties, see Use runtime properties. A short sketch of how the application can read these property groups appears after this procedure.
Under Snapshots, choose Disable. This will make it easier to update the application without loading invalid application state data.
Under Monitoring, ensure that the Monitoring metrics level is set to Application.
For CloudWatch logging, choose the Enable check box.
In the Virtual Private Cloud (VPC) section, choose the VPC to associate with your application. Choose the subnets and security group associated with your VPC that you want the application to use to access VPC resources.
Choose Update.
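For reference, the following is a minimal sketch of how an application can read the property groups you entered above at startup, using KinesisAnalyticsRuntime from the aws-kinesisanalytics-runtime library. The group and key names match the tables in this procedure; the exact handling in the sample code may differ:
import java.io.IOException;
import java.util.Map;
import java.util.Properties;
import com.amazonaws.services.kinesisanalytics.runtime.KinesisAnalyticsRuntime;

// getApplicationProperties() throws IOException if the runtime properties cannot be read.
Map<String, Properties> applicationProperties = KinesisAnalyticsRuntime.getApplicationProperties();
Properties sourceProperties = applicationProperties.get("KafkaSource");
Properties sinkProperties = applicationProperties.get("KafkaSink");
String sourceTopic = sourceProperties.getProperty("topic");                  // AWSKafkaTutorialTopic
String bootstrapServers = sourceProperties.getProperty("bootstrap.servers"); // your MSK broker list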
When you choose to enable CloudWatch logging, Managed Service for Apache Flink creates a log group and log stream for you. The names of these resources are as follows:
Log group: /aws/kinesis-analytics/MyApplication
Log stream: kinesis-analytics-log-stream
This log stream is used to monitor the application.
Run the applicationThe Flink job graph can be viewed by running the application, opening the Apache Flink dashboard, and choosing the desired Flink job.
Test the applicationIn this section, you write records to the source topic. The application reads records from the source topic and writes them to the destination topic. You verify that the application is working by writing records to the source topic and reading records from the destination topic.
To write and read records from the topics, follow the steps in Step 6: Produce and Consume Data in the Getting Started Using Amazon MSK tutorial.
To read from the destination topic, use the destination topic name instead of the source topic in your second connection to the cluster:
bin/kafka-console-consumer.sh --bootstrap-server BootstrapBrokerString
--consumer.config client.properties --topic AWSKafkaTutorialTopicDestination --from-beginning
If no records appear in the destination topic, see the Cannot access resources in a VPC section in the Troubleshoot Managed Service for Apache Flink topic.
Python examplesThe following examples demonstrate how to create applications using Python with the Apache Flink Table API.
Example: Creating a tumbling window in PythonIn this exercise, you create a Python Managed Service for Apache Flink application that aggregates data using a tumbling window.
Create dependent resourcesBefore you create a Managed Service for Apache Flink application for this exercise, you create the following dependent resources:
Two Kinesis data streams (ExampleInputStream
and ExampleOutputStream
)
An Amazon S3 bucket to store the application's code (ka-app-code-<username>)
You can create the Kinesis streams and Amazon S3 bucket using the console. For instructions for creating these resources, see the following topics:
Creating and Updating Data Streams in the Amazon Kinesis Data Streams Developer Guide. Name your data streams ExampleInputStream
and ExampleOutputStream
.
How Do I Create an S3 Bucket? in the Amazon Simple Storage Service User Guide. Give the Amazon S3 bucket a globally unique name by appending your login name, such as ka-app-code-<username>.
In this section, you use a Python script to write sample records to the stream for the application to process.
NoteThe Python script in this section uses the AWS CLI. You must configure your AWS CLI to use your account credentials and default region. To configure your AWS CLI, enter the following:
aws configure
Create a file named stock.py
with the following contents:
import datetime
import json
import random
import boto3
STREAM_NAME = "ExampleInputStream"
def get_data():
return {
'event_time': datetime.datetime.now().isoformat(),
'ticker': random.choice(['AAPL', 'AMZN', 'MSFT', 'INTC', 'TBV']),
'price': round(random.random() * 100, 2)}
def generate(stream_name, kinesis_client):
while True:
data = get_data()
print(data)
kinesis_client.put_record(
StreamName=stream_name,
Data=json.dumps(data),
PartitionKey="partitionkey")
if __name__ == '__main__':
generate(STREAM_NAME, boto3.client('kinesis', region_name='us-west-2'))
Run the stock.py
script:
$ python stock.py
Keep the script running while completing the rest of the tutorial.
The Python application code for this example is available from GitHub. To download the application code, do the following:
Install the Git client if you haven't already. For more information, see Installing Git.
Clone the remote repository with the following command:
git clone https://github.com/aws-samples/amazon-kinesis-data-analytics-examples.git
Navigate to the amazon-kinesis-data-analytics-java-examples/python/TumblingWindow
directory.
The application code is located in the tumbling-windows.py
file. Note the following about the application code:
The application uses a Kinesis table source to read from the source stream. The following snippet calls the create_table
function to create the Kinesis table source:
table_env.execute_sql(
create_input_table(input_table_name, input_stream, input_region, stream_initpos)
)
The create_table
function uses a SQL command to create a table that is backed by the streaming source:
def create_input_table(table_name, stream_name, region, stream_initpos):
return """ CREATE TABLE {0} (
ticker VARCHAR(6),
price DOUBLE,
event_time TIMESTAMP(3),
WATERMARK FOR event_time AS event_time - INTERVAL '5' SECOND
)
PARTITIONED BY (ticker)
WITH (
'connector' = 'kinesis',
'stream' = '{1}',
'aws.region' = '{2}',
'scan.stream.initpos' = '{3}',
'format' = 'json',
'json.timestamp-format.standard' = 'ISO-8601'
) """.format(table_name, stream_name, region, stream_initpos)
The application uses the Tumble
operator to aggregate records within a specified tumbling window, and return the aggregated records as a table object:
tumbling_window_table = (
input_table.window(
Tumble.over("10.seconds").on("event_time").alias("ten_second_window")
)
.group_by("ticker, ten_second_window")
.select("ticker, price.min as price, to_string(ten_second_window.end) as event_time")
The application uses the Kinesis Flink connector, from the flink-sql-connector-kinesis-1.15.2.jar
.
In this section, you upload your application code to the Amazon S3 bucket you created in the Create dependent resources section.
Use your preferred compression application to compress the tumbling-windows.py
and flink-sql-connector-kinesis-1.15.2.jar
files. Name the archive myapp.zip
.
In the Amazon S3 console, choose the ka-app-code-<username>
bucket, and choose Upload.
In the Select files step, choose Add files. Navigate to the myapp.zip
file that you created in the previous step.
You don't need to change any of the settings for the object, so choose Upload.
Your application code is now stored in an Amazon S3 bucket where your application can access it.
Create and run the Managed Service for Apache Flink applicationFollow these steps to create, configure, update, and run the application using the console.
Create the applicationOpen the Managed Service for Apache Flink console at https://console.aws.amazon.com/flink
On the Managed Service for Apache Flink dashboard, choose Create analytics application.
On the Managed Service for Apache Flink - Create application page, provide the application details as follows:
For Application name, enter MyApplication
.
For Runtime, choose Apache Flink.
NoteManaged Service for Apache Flink uses Apache Flink version 1.15.2.
Leave the version pulldown as Apache Flink version 1.15.2 (Recommended version).
For Access permissions, choose Create / update IAM role kinesis-analytics-MyApplication-us-west-2
.
Choose Create application.
When you create a Managed Service for Apache Flink application using the console, you have the option of having an IAM role and policy created for your application. Your application uses this role and policy to access its dependent resources. These IAM resources are named using your application name and Region as follows:
Policy: kinesis-analytics-service-
MyApplication
-us-west-2
Role: kinesisanalytics-
MyApplication
-us-west-2
On the MyApplication page, choose Configure.
On the Configure application page, provide the Code location:
For Amazon S3 bucket, enter ka-app-code-<username>.
For Path to Amazon S3 object, enter myapp.zip
.
Under Access to application resources, for Access permissions, choose Create / update IAM role kinesis-analytics-MyApplication-us-west-2
.
Under Properties, choose Add group.
Enter the following:
Group ID | Key | Value
consumer.config.0 | input.stream.name | ExampleInputStream
consumer.config.0 | aws.region | us-west-2
consumer.config.0 | scan.stream.initpos | LATEST
Choose Save.
Under Properties, choose Add group again.
Enter the following:
Group ID | Key | Value
producer.config.0 | output.stream.name | ExampleOutputStream
producer.config.0 | aws.region | us-west-2
producer.config.0 | shard.count | 1
Under Properties, choose Add group again. For Group ID, enter kinesis.analytics.flink.run.options
. This special property group tells your application where to find its code resources. For more information, see Specify your code files.
Enter the following:
Group ID | Key | Value
kinesis.analytics.flink.run.options | python | tumbling-windows.py
kinesis.analytics.flink.run.options | jarfile | flink-sql-connector-kinesis-1.15.2.jar
Under Monitoring, ensure that the Monitoring metrics level is set to Application.
For CloudWatch logging, select the Enable check box.
Choose Update.
When you choose to enable CloudWatch logging, Managed Service for Apache Flink creates a log group and log stream for you. The names of these resources are as follows:
Log group: /aws/kinesis-analytics/MyApplication
Log stream: kinesis-analytics-log-stream
This log stream is used to monitor the application. This is not the same log stream that the application uses to send results.
Edit the IAM policyEdit the IAM policy to add permissions to access the Kinesis data streams.
Open the IAM console at https://console.aws.amazon.com/iam/.
Choose Policies. Choose the kinesis-analytics-service-MyApplication-us-west-2
policy that the console created for you in the previous section.
On the Summary page, choose Edit policy. Choose the JSON tab.
Add the highlighted section of the following policy example to the policy. Replace the sample account IDs (012345678901
) with your account ID.
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "ReadCode",
"Effect": "Allow",
"Action": [
"s3:GetObject",
"logs:DescribeLogGroups",
"s3:GetObjectVersion"
],
"Resource": [
"arn:aws:logs:us-west-2:012345678901
:log-group:*",
"arn:aws:s3:::ka-app-code-<username>
/myapp.zip"
]
},
{
"Sid": "DescribeLogStreams",
"Effect": "Allow",
"Action": "logs:DescribeLogStreams",
"Resource": "arn:aws:logs:us-west-2:012345678901
:log-group:/aws/kinesis-analytics/MyApplication:log-stream:*"
},
{
"Sid": "PutLogEvents",
"Effect": "Allow",
"Action": "logs:PutLogEvents",
"Resource": "arn:aws:logs:us-west-2:012345678901
:log-group:/aws/kinesis-analytics/MyApplication:log-stream:kinesis-analytics-log-stream"
},
{
"Sid": "ListCloudwatchLogGroups",
"Effect": "Allow",
"Action": [
"logs:DescribeLogGroups"
],
"Resource": [
"arn:aws:logs:us-west-2:012345678901
:log-group:*"
]
},
{
"Sid": "ReadInputStream",
"Effect": "Allow",
"Action": "kinesis:*",
"Resource": "arn:aws:kinesis:us-west-2:012345678901
:stream/ExampleInputStream"
},
{
"Sid": "WriteOutputStream",
"Effect": "Allow",
"Action": "kinesis:*",
"Resource": "arn:aws:kinesis:us-west-2:012345678901
:stream/ExampleOutputStream"
}
]
}
The Flink job graph can be viewed by running the application, opening the Apache Flink dashboard, and choosing the desired Flink job.
You can check the Managed Service for Apache Flink metrics on the CloudWatch console to verify that the application is working.
Clean up AWS resourcesThis section includes procedures for cleaning up AWS resources created in the Tumbling Window tutorial.
Delete your Managed Service for Apache Flink applicationOpen the Managed Service for Apache Flink console at https://console.aws.amazon.com/flink
In the Managed Service for Apache Flink panel, choose MyApplication.
In the application's page, choose Delete and then confirm the deletion.
Open the Kinesis console at https://console.aws.amazon.com/kinesis.
In the Kinesis Data Streams panel, choose ExampleInputStream.
In the ExampleInputStream page, choose Delete Kinesis Stream and then confirm the deletion.
In the Kinesis streams page, choose the ExampleOutputStream, choose Actions, choose Delete, and then confirm the deletion.
Open the Amazon S3 console at https://console.aws.amazon.com/s3/.
Choose the ka-app-code-<username>
bucket.
Choose Delete and then enter the bucket name to confirm deletion.
Open the IAM console at https://console.aws.amazon.com/iam/.
In the navigation bar, choose Policies.
In the filter control, enter kinesis.
Choose the kinesis-analytics-service-MyApplication-us-west-2 policy.
Choose Policy Actions and then choose Delete.
In the navigation bar, choose Roles.
Choose the kinesis-analytics-MyApplication-us-west-2 role.
Choose Delete role and then confirm the deletion.
Open the CloudWatch console at https://console.aws.amazon.com/cloudwatch/.
In the navigation bar, choose Logs.
Choose the /aws/kinesis-analytics/MyApplication log group.
Choose Delete Log Group and then confirm the deletion.
Before you create a Managed Service for Apache Flink application for this exercise, you create the following dependent resources:
Two Kinesis data streams (ExampleInputStream
and ExampleOutputStream
)
An Amazon S3 bucket to store the application's code (ka-app-code-<username>)
You can create the Kinesis streams and Amazon S3 bucket using the console. For instructions for creating these resources, see the following topics:
Creating and Updating Data Streams in the Amazon Kinesis Data Streams Developer Guide. Name your data streams ExampleInputStream
and ExampleOutputStream
.
How Do I Create an S3 Bucket? in the Amazon Simple Storage Service User Guide. Give the Amazon S3 bucket a globally unique name by appending your login name, such as ka-app-code-<username>.
In this section, you use a Python script to write sample records to the stream for the application to process.
NoteThe Python script in this section uses the AWS CLI. You must configure your AWS CLI to use your account credentials and default region. To configure your AWS CLI, enter the following:
aws configure
Create a file named stock.py
with the following contents:
import datetime
import json
import random
import boto3
STREAM_NAME = "ExampleInputStream"
def get_data():
return {
'event_time': datetime.datetime.now().isoformat(),
'ticker': random.choice(['AAPL', 'AMZN', 'MSFT', 'INTC', 'TBV']),
'price': round(random.random() * 100, 2)}
def generate(stream_name, kinesis_client):
while True:
data = get_data()
print(data)
kinesis_client.put_record(
StreamName=stream_name,
Data=json.dumps(data),
PartitionKey="partitionkey")
if __name__ == '__main__':
generate(STREAM_NAME, boto3.client('kinesis', region_name='us-west-2'))
Run the stock.py
script:
$ python stock.py
Keep the script running while completing the rest of the tutorial.
The Python application code for this example is available from GitHub. To download the application code, do the following:
Install the Git client if you haven't already. For more information, see Installing Git.
Clone the remote repository with the following command:
git clone https://github.com/aws-samples/>amazon-kinesis-data-analytics-java-examples
Navigate to the amazon-kinesis-data-analytics-java-examples/python/SlidingWindow
directory.
The application code is located in the sliding-windows.py
file. Note the following about the application code:
The application uses a Kinesis table source to read from the source stream. The following snippet calls the create_input_table
function to create the Kinesis table source:
table_env.execute_sql(
create_input_table(input_table_name, input_stream, input_region, stream_initpos)
)
The create_input_table
function uses a SQL command to create a table that is backed by the streaming source:
def create_input_table(table_name, stream_name, region, stream_initpos):
return """ CREATE TABLE {0} (
ticker VARCHAR(6),
price DOUBLE,
event_time TIMESTAMP(3),
WATERMARK FOR event_time AS event_time - INTERVAL '5' SECOND
)
PARTITIONED BY (ticker)
WITH (
'connector' = 'kinesis',
'stream' = '{1}',
'aws.region' = '{2}',
'scan.stream.initpos' = '{3}',
'format' = 'json',
'json.timestamp-format.standard' = 'ISO-8601'
) """.format(table_name, stream_name, region, stream_initpos)
The application uses the Slide
operator to aggregate records within a specified sliding window, and return the aggregated records as a table object:
sliding_window_table = (
input_table
.window(
Slide.over("10.seconds")
.every("5.seconds")
.on("event_time")
.alias("ten_second_window")
)
.group_by("ticker, ten_second_window")
.select("ticker, price.min as price, to_string(ten_second_window.end) as event_time")
)
The application uses the Kinesis Flink connector, from the flink-sql-connector-kinesis-1.15.2.jar file.
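The aggregated results are written to the application's output stream by inserting the windowed table into a second, Kinesis-backed table. The following is a minimal sketch of that step, assuming an output table named output_table that was created with a CREATE TABLE statement similar to the input table:
# Insert the windowed results into the Kinesis-backed output table.
# The table name "output_table" is illustrative; the sample application
# creates its own output table from runtime properties.
table_result = sliding_window_table.execute_insert("output_table")

# When running locally, you can wait on the result to keep the job running;
# in Managed Service for Apache Flink, the streaming job runs continuously.
# table_result.wait()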
In this section, you upload your application code to the Amazon S3 bucket you created in the Create dependent resources section.
This section describes how to package your Python application.
Use your preferred compression application to compress the sliding-windows.py
and flink-sql-connector-kinesis-1.15.2.jar
files. Name the archive myapp.zip
.
In the Amazon S3 console, choose the ka-app-code-<username>
bucket, and choose Upload.
In the Select files step, choose Add files. Navigate to the myapp.zip
file that you created in the previous step.
You don't need to change any of the settings for the object, so choose Upload.
Your application code is now stored in an Amazon S3 bucket where your application can access it.
Create and run the Managed Service for Apache Flink applicationFollow these steps to create, configure, update, and run the application using the console.
Create the applicationOpen the Managed Service for Apache Flink console at https://console.aws.amazon.com/flink
On the Managed Service for Apache Flink dashboard, choose Create analytics application.
On the Managed Service for Apache Flink - Create application page, provide the application details as follows:
For Application name, enter MyApplication
.
For Runtime, choose Apache Flink.
NoteManaged Service for Apache Flink uses Apache Flink version 1.15.2.
Leave the version pulldown as Apache Flink version 1.15.2 (Recommended version).
For Access permissions, choose Create / update IAM role kinesis-analytics-MyApplication-us-west-2
.
Choose Create application.
When you create a Managed Service for Apache Flink application using the console, you have the option of having an IAM role and policy created for your application. Your application uses this role and policy to access its dependent resources. These IAM resources are named using your application name and Region as follows:
Policy: kinesis-analytics-service-
MyApplication
-us-west-2
Role: kinesisanalytics-
MyApplication
-us-west-2
On the MyApplication page, choose Configure.
On the Configure application page, provide the Code location:
For Amazon S3 bucket, enter ka-app-code-<username>.
For Path to Amazon S3 object, enter myapp.zip
.
Under Access to application resources, for Access permissions, choose Create / update IAM role kinesis-analytics-MyApplication-us-west-2
.
Under Properties, choose Add group.
Enter the following application properties and values:
Group ID | Key | Value
consumer.config.0 | input.stream.name | ExampleInputStream
consumer.config.0 | aws.region | us-west-2
consumer.config.0 | scan.stream.initpos | LATEST
Choose Save.
Under Properties, choose Add group again.
Enter the following application properties and values:
Group ID | Key | Value
producer.config.0 | output.stream.name | ExampleOutputStream
producer.config.0 | aws.region | us-west-2
producer.config.0 | shard.count | 1
Under Properties, choose Add group again. For Group ID, enter kinesis.analytics.flink.run.options
. This special property group tells your application where to find its code resources. For more information, see Specify your code files.
Enter the following application properties and values:
Group ID | Key | Value
kinesis.analytics.flink.run.options | python | sliding-windows.py
kinesis.analytics.flink.run.options | jarfile | flink-sql-connector-kinesis-1.15.2.jar
Under Monitoring, ensure that the Monitoring metrics level is set to Application.
For CloudWatch logging, select the Enable check box.
Choose Update.
When you choose to enable CloudWatch logging, Managed Service for Apache Flink creates a log group and log stream for you. The names of these resources are as follows:
Log group: /aws/kinesis-analytics/MyApplication
Log stream: kinesis-analytics-log-stream
This log stream is used to monitor the application. This is not the same log stream that the application uses to send results.
Edit the IAM policyEdit the IAM policy to add permissions to access the Kinesis data streams.
Open the IAM console at https://console.aws.amazon.com/iam/.
Choose Policies. Choose the kinesis-analytics-service-MyApplication-us-west-2
policy that the console created for you in the previous section.
On the Summary page, choose Edit policy. Choose the JSON tab.
Add the highlighted section of the following policy example to the policy. Replace the sample account IDs (012345678901
) with your account ID.
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "ReadCode",
"Effect": "Allow",
"Action": [
"s3:GetObject",
"logs:DescribeLogGroups",
"s3:GetObjectVersion"
],
"Resource": [
"arn:aws:logs:us-west-2:012345678901
:log-group:*",
"arn:aws:s3:::ka-app-code-<username>
/myapp.zip"
]
},
{
"Sid": "DescribeLogStreams",
"Effect": "Allow",
"Action": "logs:DescribeLogStreams",
"Resource": "arn:aws:logs:us-west-2:012345678901
:log-group:/aws/kinesis-analytics/MyApplication:log-stream:*"
},
{
"Sid": "PutLogEvents",
"Effect": "Allow",
"Action": "logs:PutLogEvents",
"Resource": "arn:aws:logs:us-west-2:012345678901
:log-group:/aws/kinesis-analytics/MyApplication:log-stream:kinesis-analytics-log-stream"
},
{
"Sid": "ListCloudwatchLogGroups",
"Effect": "Allow",
"Action": [
"logs:DescribeLogGroups"
],
"Resource": [
"arn:aws:logs:us-west-2:012345678901
:log-group:*"
]
},
{
"Sid": "ReadInputStream",
"Effect": "Allow",
"Action": "kinesis:*",
"Resource": "arn:aws:kinesis:us-west-2:012345678901
:stream/ExampleInputStream"
},
{
"Sid": "WriteOutputStream",
"Effect": "Allow",
"Action": "kinesis:*",
"Resource": "arn:aws:kinesis:us-west-2:012345678901
:stream/ExampleOutputStream"
}
]
}
The Flink job graph can be viewed by running the application, opening the Apache Flink dashboard, and choosing the desired Flink job.
You can check the Managed Service for Apache Flink metrics on the CloudWatch console to verify that the application is working.
Clean up AWS resourcesThis section includes procedures for cleaning up AWS resources created in the Sliding Window tutorial.
Delete your Managed Service for Apache Flink applicationOpen the Managed Service for Apache Flink console at https://console.aws.amazon.com/flink
In the Managed Service for Apache Flink panel, choose MyApplication.
In the application's page, choose Delete and then confirm the deletion.
Open the Kinesis console at https://console.aws.amazon.com/kinesis.
In the Kinesis Data Streams panel, choose ExampleInputStream.
In the ExampleInputStream page, choose Delete Kinesis Stream and then confirm the deletion.
In the Kinesis streams page, choose the ExampleOutputStream, choose Actions, choose Delete, and then confirm the deletion.
Open the Amazon S3 console at https://console.aws.amazon.com/s3/.
Choose the ka-app-code-<username>
bucket.
Choose Delete and then enter the bucket name to confirm deletion.
Open the IAM console at https://console.aws.amazon.com/iam/.
In the navigation bar, choose Policies.
In the filter control, enter kinesis.
Choose the kinesis-analytics-service-MyApplication-us-west-2 policy.
Choose Policy Actions and then choose Delete.
In the navigation bar, choose Roles.
Choose the kinesis-analytics-MyApplication-us-west-2 role.
Choose Delete role and then confirm the deletion.
Open the CloudWatch console at https://console.aws.amazon.com/cloudwatch/.
In the navigation bar, choose Logs.
Choose the /aws/kinesis-analytics/MyApplication log group.
Choose Delete Log Group and then confirm the deletion.
In this exercise, you create a Python Managed Service for Apache Flink application that streams data to an Amazon Simple Storage Service sink.
Create dependent resourcesBefore you create a Managed Service for Apache Flink application for this exercise, you create the following dependent resources:
A Kinesis data stream (ExampleInputStream
)
An Amazon S3 bucket to store the application's code and output (ka-app-code-<username>)
Managed Service for Apache Flink cannot write data to Amazon S3 with server-side encryption enabled on Managed Service for Apache Flink.
You can create the Kinesis stream and Amazon S3 bucket using the console. For instructions for creating these resources, see the following topics:
Creating and Updating Data Streams in the Amazon Kinesis Data Streams Developer Guide. Name your data stream ExampleInputStream
.
How Do I Create an S3 Bucket? in the Amazon Simple Storage Service User Guide. Give the Amazon S3 bucket a globally unique name by appending your login name, such as ka-app-code-<username>.
In this section, you use a Python script to write sample records to the stream for the application to process.
NoteThe Python script in this section uses the AWS CLI. You must configure your AWS CLI to use your account credentials and default region. To configure your AWS CLI, enter the following:
aws configure
Create a file named stock.py
with the following contents:
import datetime
import json
import random
import boto3
STREAM_NAME = "ExampleInputStream"
def get_data():
return {
'event_time': datetime.datetime.now().isoformat(),
'ticker': random.choice(['AAPL', 'AMZN', 'MSFT', 'INTC', 'TBV']),
'price': round(random.random() * 100, 2)}
def generate(stream_name, kinesis_client):
while True:
data = get_data()
print(data)
kinesis_client.put_record(
StreamName=stream_name,
Data=json.dumps(data),
PartitionKey="partitionkey")
if __name__ == '__main__':
generate(STREAM_NAME, boto3.client('kinesis', region_name='us-west-2'))
Run the stock.py
script:
$ python stock.py
Keep the script running while completing the rest of the tutorial.
The Python application code for this example is available from GitHub. To download the application code, do the following:
Install the Git client if you haven't already. For more information, see Installing Git.
Clone the remote repository with the following command:
git clone https://github.com/aws-samples/amazon-kinesis-data-analytics-examples.git
Navigate to the amazon-kinesis-data-analytics-java-examples/python/S3Sink
directory.
The application code is located in the streaming-file-sink.py
file. Note the following about the application code:
The application uses a Kinesis table source to read from the source stream. The following snippet calls the create_source_table
function to create the Kinesis table source:
table_env.execute_sql(
create_source_table(input_table_name, input_stream, input_region, stream_initpos)
)
The create_source_table function uses a SQL command to create a table that is backed by the streaming source.
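The following is a minimal sketch of what create_source_table can look like, modeled on the create_input_table function from the earlier examples in this chapter; the file in the repository may differ in details:
def create_source_table(table_name, stream_name, region, stream_initpos):
    # Sketch modeled on create_input_table from the tumbling and sliding
    # window examples; columns and connector options match those examples.
    return """ CREATE TABLE {0} (
                ticker VARCHAR(6),
                price DOUBLE,
                event_time TIMESTAMP(3),
                WATERMARK FOR event_time AS event_time - INTERVAL '5' SECOND
              )
              PARTITIONED BY (ticker)
              WITH (
                'connector' = 'kinesis',
                'stream' = '{1}',
                'aws.region' = '{2}',
                'scan.stream.initpos' = '{3}',
                'format' = 'json',
                'json.timestamp-format.standard' = 'ISO-8601'
              ) """.format(table_name, stream_name, region, stream_initpos)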
The application uses the filesystem
connector to send records to an Amazon S3 bucket:
def create_sink_table(table_name, bucket_name):
return """ CREATE TABLE {0} (
ticker VARCHAR(6),
price DOUBLE,
event_time VARCHAR(64)
)
PARTITIONED BY (ticker)
WITH (
'connector'='filesystem',
'path'='s3a://{1}/',
'format'='json',
'sink.partition-commit.policy.kind'='success-file',
'sink.partition-commit.delay' = '1 min'
) """.format(table_name, bucket_name)
The application uses the Kinesis Flink connector, from the flink-sql-connector-kinesis-1.15.2.jar file.
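To move records from the Kinesis-backed source table into the Amazon S3 sink table, the application executes an INSERT statement against the table environment. The following is a minimal sketch of that step; the table-name variables are illustrative and assume the names used by the create_source_table and create_sink_table functions:
# Continuously copy rows from the source table into the filesystem (Amazon S3) sink table.
# input_table_name and output_table_name are illustrative variable names.
table_result = table_env.execute_sql(
    "INSERT INTO {0} SELECT * FROM {1}".format(output_table_name, input_table_name)
)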
In this section, you upload your application code to the Amazon S3 bucket you created in the Create dependent resources section.
Use your preferred compression application to compress the streaming-file-sink.py
and flink-sql-connector-kinesis-1.15.2.jar files. Name the archive myapp.zip
.
In the Amazon S3 console, choose the ka-app-code-<username>
bucket, and choose Upload.
In the Select files step, choose Add files. Navigate to the myapp.zip
file that you created in the previous step.
You don't need to change any of the settings for the object, so choose Upload.
Your application code is now stored in an Amazon S3 bucket where your application can access it.
Create and run the Managed Service for Apache Flink applicationFollow these steps to create, configure, update, and run the application using the console.
Create the applicationOpen the Managed Service for Apache Flink console at https://console.aws.amazon.com/flink
On the Managed Service for Apache Flink dashboard, choose Create analytics application.
On the Managed Service for Apache Flink - Create application page, provide the application details as follows:
For Application name, enter MyApplication
.
For Runtime, choose Apache Flink.
NoteManaged Service for Apache Flink uses Apache Flink version 1.15.2.
Leave the version pulldown as Apache Flink version 1.15.2 (Recommended version).
For Access permissions, choose Create / update IAM role kinesis-analytics-MyApplication-us-west-2
.
Choose Create application.
When you create a Managed Service for Apache Flink application using the console, you have the option of having an IAM role and policy created for your application. Your application uses this role and policy to access its dependent resources. These IAM resources are named using your application name and Region as follows:
Policy: kinesis-analytics-service-
MyApplication
-us-west-2
Role: kinesisanalytics-
MyApplication
-us-west-2
On the MyApplication page, choose Configure.
On the Configure application page, provide the Code location:
For Amazon S3 bucket, enter ka-app-code-<username>.
For Path to Amazon S3 object, enter myapp.zip
.
Under Access to application resources, for Access permissions, choose Create / update IAM role kinesis-analytics-MyApplication-us-west-2
.
Under Properties, choose Add group.
Enter the following application properties and values:
Group ID | Key | Value
consumer.config.0 | input.stream.name | ExampleInputStream
consumer.config.0 | aws.region | us-west-2
consumer.config.0 | scan.stream.initpos | LATEST
Choose Save.
Under Properties, choose Add group again. For Group ID, enter kinesis.analytics.flink.run.options
. This special property group tells your application where to find its code resources. For more information, see Specify your code files.
Enter the following application properties and values:
Group ID | Key | Value
kinesis.analytics.flink.run.options | python | streaming-file-sink.py
kinesis.analytics.flink.run.options | jarfile | S3Sink/lib/flink-sql-connector-kinesis-1.15.2.jar
Under Properties, choose Add group again. For Group ID, enter sink.config.0. The application reads this property group at runtime to determine the Amazon S3 bucket to write its output to.
Enter the following application properties and values (replace bucket-name with the actual name of your Amazon S3 bucket):
Group ID | Key | Value
sink.config.0 | output.bucket.name | bucket-name
Under Monitoring, ensure that the Monitoring metrics level is set to Application.
For CloudWatch logging, select the Enable check box.
Choose Update.
When you choose to enable CloudWatch logging, Managed Service for Apache Flink creates a log group and log stream for you. The names of these resources are as follows:
Log group: /aws/kinesis-analytics/MyApplication
Log stream: kinesis-analytics-log-stream
This log stream is used to monitor the application. This is not the same log stream that the application uses to send results.
Edit the IAM policyEdit the IAM policy to add permissions to access the Kinesis data streams.
Open the IAM console at https://console.aws.amazon.com/iam/.
Choose Policies. Choose the kinesis-analytics-service-MyApplication-us-west-2
policy that the console created for you in the previous section.
On the Summary page, choose Edit policy. Choose the JSON tab.
Add the highlighted section of the following policy example to the policy. Replace the sample account IDs (012345678901
) with your account ID.
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "ReadCode",
"Effect": "Allow",
"Action": [
"s3:GetObject",
"logs:DescribeLogGroups",
"s3:GetObjectVersion"
],
"Resource": [
"arn:aws:logs:us-west-2:012345678901
:log-group:*",
"arn:aws:s3:::ka-app-code-<username>
/myapp.zip"
]
},
{
"Sid": "DescribeLogStreams",
"Effect": "Allow",
"Action": "logs:DescribeLogStreams",
"Resource": "arn:aws:logs:us-west-2:012345678901
:log-group:/aws/kinesis-analytics/MyApplication:log-stream:*"
},
{
"Sid": "PutLogEvents",
"Effect": "Allow",
"Action": "logs:PutLogEvents",
"Resource": "arn:aws:logs:us-west-2:012345678901
:log-group:/aws/kinesis-analytics/MyApplication:log-stream:kinesis-analytics-log-stream"
},
{
"Sid": "ListCloudwatchLogGroups",
"Effect": "Allow",
"Action": [
"logs:DescribeLogGroups"
],
"Resource": [
"arn:aws:logs:us-west-2:012345678901
:log-group:*"
]
},
{
"Sid": "ReadInputStream",
"Effect": "Allow",
"Action": "kinesis:*",
"Resource": "arn:aws:kinesis:us-west-2:012345678901
:stream/ExampleInputStream"
},
{
"Sid": "WriteObjects",
"Effect": "Allow",
"Action": [
"s3:Abort*",
"s3:DeleteObject*",
"s3:GetObject*",
"s3:GetBucket*",
"s3:List*",
"s3:ListBucket",
"s3:PutObject"
],
"Resource": [
"arn:aws:s3:::ka-app-code-<username>",
"arn:aws:s3:::ka-app-code-<username>/*"
]
}
]
}
The Flink job graph can be viewed by running the application, opening the Apache Flink dashboard, and choosing the desired Flink job.
You can check the Managed Service for Apache Flink metrics on the CloudWatch console to verify that the application is working.
Clean up AWS resourcesThis section includes procedures for cleaning up AWS resources created in this tutorial.
Delete your Managed Service for Apache Flink applicationOpen the Managed Service for Apache Flink console at https://console.aws.amazon.com/flink
In the Managed Service for Apache Flink panel, choose MyApplication.
In the application's page, choose Delete and then confirm the deletion.
Open the Kinesis console at https://console.aws.amazon.com/kinesis.
In the Kinesis Data Streams panel, choose ExampleInputStream.
In the ExampleInputStream page, choose Delete Kinesis Stream and then confirm the deletion.
Open the Amazon S3 console at https://console.aws.amazon.com/s3/.
Choose the ka-app-code-<username>
bucket.
Choose Delete and then enter the bucket name to confirm deletion.
Open the IAM console at https://console.aws.amazon.com/iam/.
In the navigation bar, choose Policies.
In the filter control, enter kinesis.
Choose the kinesis-analytics-service-MyApplication-us-west-2 policy.
Choose Policy Actions and then choose Delete.
In the navigation bar, choose Roles.
Choose the kinesis-analytics-MyApplication-us-west-2 role.
Choose Delete role and then confirm the deletion.
Open the CloudWatch console at https://console.aws.amazon.com/cloudwatch/.
In the navigation bar, choose Logs.
Choose the /aws/kinesis-analytics/MyApplication log group.
Choose Delete Log Group and then confirm the deletion.
The following examples demonstrate how to create applications using Scala with Apache Flink.
Example: Creating a tumbling window in Scala NoteStarting from version 1.15 Flink is Scala free. Applications can now use the Java API from any Scala version. Flink still uses Scala in a few key components internally but doesn't expose Scala into the user code classloader. Because of that, users need to add Scala dependencies into their jar-archives.
For more information about Scala changes in Flink 1.15, see Scala Free in One Fifteen.
In this exercise, you create a simple streaming application that uses Scala 3.2.0 and Flink's Java DataStream API. The application reads data from a Kinesis stream, aggregates it using tumbling windows, and writes the results to an output Kinesis stream.
Download and examine the application codeThe application code for this example is available from GitHub. To download the application code, do the following:
Install the Git client if you haven't already. For more information, see Installing Git.
Clone the remote repository with the following command:
git clone https://github.com/aws-samples/amazon-kinesis-data-analytics-examples.git
Navigate to the amazon-kinesis-data-analytics-java-examples/scala/TumblingWindow
directory.
Note the following about the application code:
A build.sbt
file contains information about the application's configuration and dependencies, including the Managed Service for Apache Flink libraries.
The BasicStreamingJob.scala
file contains the main method that defines the application's functionality.
The application uses a Kinesis source to read from the source stream. The following snippet creates the Kinesis source:
private def createSource: FlinkKinesisConsumer[String] = {
val applicationProperties = KinesisAnalyticsRuntime.getApplicationProperties
val inputProperties = applicationProperties.get("ConsumerConfigProperties")
new FlinkKinesisConsumer[String](inputProperties.getProperty(streamNameKey, defaultInputStreamName),
new SimpleStringSchema, inputProperties)
}
The application also uses a Kinesis sink to write into the result stream. The following snippet creates the Kinesis sink:
private def createSink: KinesisStreamsSink[String] = {
val applicationProperties = KinesisAnalyticsRuntime.getApplicationProperties
val outputProperties = applicationProperties.get("ProducerConfigProperties")
KinesisStreamsSink.builder[String]
.setKinesisClientProperties(outputProperties)
.setSerializationSchema(new SimpleStringSchema)
.setStreamName(outputProperties.getProperty(streamNameKey, defaultOutputStreamName))
.setPartitionKeyGenerator((element: String) => String.valueOf(element.hashCode))
.build
}
The application uses the window operator to find the count of values for each stock symbol over a 10-second tumbling window. The following code creates the operator and sends the aggregated data to a new Kinesis Data Streams sink:
environment.addSource(createSource)
.map { value =>
val jsonNode = jsonParser.readValue(value, classOf[JsonNode])
new Tuple2[String, Int](jsonNode.get("ticker").toString, 1)
}
.returns(Types.TUPLE(Types.STRING, Types.INT))
.keyBy(v => v.f0) // Logically partition the stream for each ticker
.window(TumblingProcessingTimeWindows.of(Time.seconds(10)))
.sum(1) // Sum the number of tickers per partition
.map { value => value.f0 + "," + value.f1.toString + "\n" }
.sinkTo(createSink)
The application creates source and sink connectors to access external resources using a StreamExecutionEnvironment object.
The application creates the source and sink connectors using dynamic application properties. The application's runtime properties are read to configure the connectors. For more information about runtime properties, see Runtime Properties.
In this section, you compile and upload your application code to an Amazon S3 bucket.
Compile the Application CodeUse the SBT build tool to build the Scala code for the application. To install SBT, see Install sbt with cs setup. You also need to install the Java Development Kit (JDK). See Prerequisites for Completing the Exercises.
To use your application code, you compile and package it into a JAR file. You can compile and package your code with SBT:
sbt assembly
If the application compiles successfully, the following file is created:
target/scala-3.2.0/tumbling-window-scala-1.0.jar
In this section, you create an Amazon S3 bucket and upload your application code.
Open the Amazon S3 console at https://console.aws.amazon.com/s3/.
Choose Create bucket
Enter ka-app-code-<username>
in the Bucket name field. Add a suffix to the bucket name, such as your user name, to make it globally unique. Choose Next.
In Configure options, keep the settings as they are, and choose Next.
In Set permissions, keep the settings as they are, and choose Next.
Choose Create bucket.
Choose the ka-app-code-<username>
bucket, and then choose Upload.
In the Select files step, choose Add files. Navigate to the tumbling-window-scala-1.0.jar
file that you created in the previous step.
You don't need to change any of the settings for the object, so choose Upload.
Your application code is now stored in an Amazon S3 bucket where your application can access it.
Create and run the application (console)Follow these steps to create, configure, update, and run the application using the console.
Create the applicationOpen the Managed Service for Apache Flink console at https://console.aws.amazon.com/flink
On the Managed Service for Apache Flink dashboard, choose Create analytics application.
On the Managed Service for Apache Flink - Create application page, provide the application details as follows:
For Application name, enter MyApplication
.
For Description, enter My Scala test app
.
For Runtime, choose Apache Flink.
Leave the version as Apache Flink version 1.15.2 (Recommended version).
For Access permissions, choose Create / update IAM role kinesis-analytics-MyApplication-us-west-2
.
Choose Create application.
When you create a Managed Service for Apache Flink application using the console, you have the option of having an IAM role and policy created for your application. Your application uses this role and policy to access its dependent resources. These IAM resources are named using your application name and Region as follows:
Policy: kinesis-analytics-service-
MyApplication
-us-west-2
Role: kinesisanalytics-
MyApplication
-us-west-2
Use the following procedure to configure the application.
To configure the applicationOn the MyApplication page, choose Configure.
On the Configure application page, provide the Code location:
For Amazon S3 bucket, enter ka-app-code-<username>.
For Path to Amazon S3 object, enter tumbling-window-scala-1.0.jar
.
Under Access to application resources, for Access permissions, choose Create / update IAM role kinesis-analytics-MyApplication-us-west-2
.
Under Properties, choose Add group.
Enter the following:
Group ID | Key | Value
ConsumerConfigProperties | input.stream.name | ExampleInputStream
ConsumerConfigProperties | aws.region | us-west-2
ConsumerConfigProperties | flink.stream.initpos | LATEST
Choose Save.
Under Properties, choose Add group again.
Enter the following:
Group ID | Key | Value
ProducerConfigProperties | output.stream.name | ExampleOutputStream
ProducerConfigProperties | aws.region | us-west-2
Under Monitoring, ensure that the Monitoring metrics level is set to Application.
For CloudWatch logging, choose the Enable check box.
Choose Update.
When you choose to enable Amazon CloudWatch logging, Managed Service for Apache Flink creates a log group and log stream for you. The names of these resources are as follows:
Log group: /aws/kinesis-analytics/MyApplication
Log stream: kinesis-analytics-log-stream
Edit the IAM policy to add permissions to access the Amazon S3 bucket.
To edit the IAM policy to add S3 bucket permissionsOpen the IAM console at https://console.aws.amazon.com/iam/.
Choose Policies. Choose the kinesis-analytics-service-MyApplication-us-west-2
policy that the console created for you in the previous section.
On the Summary page, choose Edit policy. Choose the JSON tab.
Add the highlighted section of the following policy example to the policy. Replace the sample account IDs (012345678901
) with your account ID.
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "ReadCode",
"Effect": "Allow",
"Action": [
"s3:GetObject",
"s3:GetObjectVersion"
],
"Resource": [
"arn:aws:s3:::ka-app-code-username
/tumbling-window-scala-1.0.jar"
]
},
{
"Sid": "DescribeLogGroups",
"Effect": "Allow",
"Action": [
"logs:DescribeLogGroups"
],
"Resource": [
"arn:aws:logs:us-west-2:012345678901
:log-group:*"
]
},
{
"Sid": "DescribeLogStreams",
"Effect": "Allow",
"Action": [
"logs:DescribeLogStreams"
],
"Resource": [
"arn:aws:logs:us-west-2:012345678901
:log-group:/aws/kinesis-analytics/MyApplication:log-stream:*"
]
},
{
"Sid": "PutLogEvents",
"Effect": "Allow",
"Action": [
"logs:PutLogEvents"
],
"Resource": [
"arn:aws:logs:us-west-2:012345678901
:log-group:/aws/kinesis-analytics/MyApplication:log-stream:kinesis-analytics-log-stream"
]
},
{
"Sid": "ReadInputStream",
"Effect": "Allow",
"Action": "kinesis:*",
"Resource": "arn:aws:kinesis:us-west-2:012345678901
:stream/ExampleInputStream"
},
{
"Sid": "WriteOutputStream",
"Effect": "Allow",
"Action": "kinesis:*",
"Resource": "arn:aws:kinesis:us-west-2:012345678901
:stream/ExampleOutputStream"
}
]
}
The Flink job graph can be viewed by running the application, opening the Apache Flink dashboard, and choosing the desired Flink job.
Stop the applicationTo stop the application, on the MyApplication page, choose Stop. Confirm the action.
Create and run the application (CLI)In this section, you use the AWS Command Line Interface to create and run the Managed Service for Apache Flink application. Use the kinesisanalyticsv2 AWS CLI command to create and interact with Managed Service for Apache Flink applications.
Create a permissions policy NoteYou must create a permissions policy and role for your application. If you do not create these IAM resources, your application cannot access its data and log streams.
First, you create a permissions policy with two statements: one that grants permissions for the read action on the source stream, and another that grants permissions for write actions on the sink stream. You then attach the policy to an IAM role (which you create in the next section). Thus, when Managed Service for Apache Flink assumes the role, the service has the necessary permissions to read from the source stream and write to the sink stream.
Use the following code to create the AKReadSourceStreamWriteSinkStream permissions policy. Replace the sample account ID (012345678901) in the Amazon Resource Names (ARNs) with your account ID.
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ReadInputStream",
            "Effect": "Allow",
            "Action": "kinesis:*",
            "Resource": "arn:aws:kinesis:us-west-2:012345678901:stream/ExampleInputStream"
        },
        {
            "Sid": "WriteOutputStream",
            "Effect": "Allow",
            "Action": "kinesis:*",
            "Resource": "arn:aws:kinesis:us-west-2:012345678901:stream/ExampleOutputStream"
        }
    ]
}
For step-by-step instructions to create a permissions policy, see Tutorial: Create and Attach Your First Customer Managed Policy in the IAM User Guide.
Create an IAM roleIn this section, you create an IAM role that the Managed Service for Apache Flink application can assume to read a source stream and write to the sink stream.
Managed Service for Apache Flink cannot access your stream without permissions. You grant these permissions via an IAM role. Each IAM role has two policies attached. The trust policy grants Managed Service for Apache Flink permission to assume the role, and the permissions policy determines what Managed Service for Apache Flink can do after assuming the role.
You attach the permissions policy that you created in the preceding section to this role.
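For reference, the trust policy on the role looks similar to the following. This is a minimal sketch; the console creates an equivalent trust relationship for you when you choose Kinesis as the trusted service and Managed Service for Apache Flink as the use case:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "kinesisanalytics.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}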
To create an IAM roleOpen the IAM console at https://console.aws.amazon.com/iam/.
In the navigation pane, choose Roles and then Create Role.
Under Select type of trusted identity, choose AWS Service
Under Choose the service that will use this role, choose Kinesis.
Under Select your use case, choose Managed Service for Apache Flink.
Choose Next: Permissions.
On the Attach permissions policies page, choose Next: Review. You attach permissions policies after you create the role.
On the Create role page, enter MF-stream-rw-role
for the Role name. Choose Create role.
Now you have created a new IAM role called MF-stream-rw-role. Next, you update the trust and permissions policies for the role.
Attach the permissions policy to the role.
NoteFor this exercise, Managed Service for Apache Flink assumes this role for both reading data from a Kinesis data stream (source) and writing output to another Kinesis data stream. So you attach the policy that you created in the previous step, Create a Permissions Policy.
On the Summary page, choose the Permissions tab.
Choose Attach Policies.
In the search box, enter AKReadSourceStreamWriteSinkStream
(the policy that you created in the previous section).
Choose the AKReadSourceStreamWriteSinkStream
policy, and choose Attach policy.
You now have created the service execution role that your application uses to access resources. Make a note of the ARN of the new role.
For step-by-step instructions for creating a role, see Creating an IAM Role (Console) in the IAM User Guide.
Create the applicationSave the following JSON code to a file named create_request.json
. Replace the sample role ARN with the ARN for the role that you created previously. Replace the bucket ARN suffix (username) with the suffix that you chose in the previous section. Replace the sample account ID (012345678901) in the service execution role with your account ID. The ServiceExecutionRole must be the ARN of the IAM role that you created in the previous section.
"ApplicationName": "tumbling_window",
"ApplicationDescription": "Scala getting started application",
"RuntimeEnvironment": "FLINK-1_15",
"ServiceExecutionRole": "arn:aws:iam::012345678901:role/MF-stream-rw-role",
"ApplicationConfiguration": {
"ApplicationCodeConfiguration": {
"CodeContent": {
"S3ContentLocation": {
"BucketARN": "arn:aws:s3:::ka-app-code-username
",
"FileKey": "tumbling-window-scala-1.0.jar"
}
},
"CodeContentType": "ZIPFILE"
},
"EnvironmentProperties": {
"PropertyGroups": [
{
"PropertyGroupId": "ConsumerConfigProperties",
"PropertyMap" : {
"aws.region" : "us-west-2",
"stream.name" : "ExampleInputStream",
"flink.stream.initpos" : "LATEST"
}
},
{
"PropertyGroupId": "ProducerConfigProperties",
"PropertyMap" : {
"aws.region" : "us-west-2",
"stream.name" : "ExampleOutputStream"
}
}
]
}
},
"CloudWatchLoggingOptions": [
{
"LogStreamARN": "arn:aws:logs:us-west-2:012345678901
:log-group:MyApplication:log-stream:kinesis-analytics-log-stream"
}
]
}
Execute the CreateApplication action with the following request to create the application:
aws kinesisanalyticsv2 create-application --cli-input-json file://create_request.json
The application is now created. You start the application in the next step.
Start the applicationIn this section, you use the StartApplication action to start the application.
To start the applicationSave the following JSON code to a file named start_request.json
.
{
"ApplicationName": "tumbling_window",
"RunConfiguration": {
"ApplicationRestoreConfiguration": {
"ApplicationRestoreType": "RESTORE_FROM_LATEST_SNAPSHOT"
}
}
}
Execute the StartApplication
action with the preceding request to start the application:
aws kinesisanalyticsv2 start-application --cli-input-json file://start_request.json
The application is now running. You can check the Managed Service for Apache Flink metrics on the Amazon CloudWatch console to verify that the application is working.
Stop the applicationIn this section, you use the StopApplication action to stop the application.
To stop the applicationSave the following JSON code to a file named stop_request.json
.
{
"ApplicationName": "tumbling_window"
}
Execute the StopApplication
action with the preceding request to stop the application:
aws kinesisanalyticsv2 stop-application --cli-input-json file://stop_request.json
The application is now stopped.
Add a CloudWatch logging optionYou can use the AWS CLI to add an Amazon CloudWatch log stream to your application. For information about using CloudWatch Logs with your application, see Setting Up Application Logging.
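For example, the following AWS CLI command is a minimal sketch of adding a logging option; it assumes application version 1 and the log group and log stream names used in this tutorial:
aws kinesisanalyticsv2 add-application-cloud-watch-logging-option \
    --application-name tumbling_window \
    --current-application-version-id 1 \
    --cloud-watch-logging-option LogStreamARN=arn:aws:logs:us-west-2:012345678901:log-group:MyApplication:log-stream:kinesis-analytics-log-stream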
Update environment propertiesIn this section, you use the UpdateApplication action to change the environment properties for the application without recompiling the application code. In this example, you change the Region of the source and destination streams.
To update environment properties for the applicationSave the following JSON code to a file named update_properties_request.json
.
{"ApplicationName": "tumbling_window",
"CurrentApplicationVersionId": 1,
"ApplicationConfigurationUpdate": {
"EnvironmentPropertyUpdates": {
"PropertyGroups": [
{
"PropertyGroupId": "ConsumerConfigProperties",
"PropertyMap" : {
"aws.region" : "us-west-2",
"stream.name" : "ExampleInputStream",
"flink.stream.initpos" : "LATEST"
}
},
{
"PropertyGroupId": "ProducerConfigProperties",
"PropertyMap" : {
"aws.region" : "us-west-2",
"stream.name" : "ExampleOutputStream"
}
}
]
}
}
}
Execute the UpdateApplication
action with the preceding request to update environment properties:
aws kinesisanalyticsv2 update-application --cli-input-json file://update_properties_request.json
When you need to update your application code with a new version of your code package, you use the UpdateApplication CLI action.
NoteTo load a new version of the application code with the same file name, you must specify the new object version. For more information about using Amazon S3 object versions, see Enabling or Disabling Versioning.
To use the AWS CLI, delete your previous code package from your Amazon S3 bucket, upload the new version, and call UpdateApplication
, specifying the same Amazon S3 bucket and object name, and the new object version. The application will restart with the new code package.
The following sample request for the UpdateApplication
action reloads the application code and restarts the application. Update the CurrentApplicationVersionId
to the current application version. You can check the current application version using the ListApplications
or DescribeApplication
actions. Update the bucket name suffix (<username>) with the suffix that you chose in the Create dependent resources section.
{
"ApplicationName": "tumbling_window",
"CurrentApplicationVersionId": 1,
"ApplicationConfigurationUpdate": {
"ApplicationCodeConfigurationUpdate": {
"CodeContentUpdate": {
"S3ContentLocationUpdate": {
"BucketARNUpdate": "arn:aws:s3:::ka-app-code-username
",
"FileKeyUpdate": "tumbling-window-scala-1.0.jar",
"ObjectVersionUpdate": "SAMPLEUehYngP87ex1nzYIGYgfhypvDU"
}
}
}
}
}
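Assuming that you save the preceding request to a file named update_code_request.json (an illustrative name), you can apply it with the following command:
aws kinesisanalyticsv2 update-application --cli-input-json file://update_code_request.json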
Clean up AWS resources
This section includes procedures for cleaning up AWS resources created in the Tumbling Window tutorial.
Delete your Managed Service for Apache Flink applicationOpen the Managed Service for Apache Flink console at https://console.aws.amazon.com/flink
In the Managed Service for Apache Flink panel, choose MyApplication.
In the application's page, choose Delete and then confirm the deletion.
Open the Kinesis console at https://console.aws.amazon.com/kinesis.
In the Kinesis Data Streams panel, choose ExampleInputStream.
In the ExampleInputStream page, choose Delete Kinesis Stream and then confirm the deletion.
In the Kinesis streams page, choose the ExampleOutputStream, choose Actions, choose Delete, and then confirm the deletion.
Open the Amazon S3 console at https://console.aws.amazon.com/s3/.
Choose the ka-app-code-<username>
bucket.
Choose Delete and then enter the bucket name to confirm deletion.
Open the IAM console at https://console.aws.amazon.com/iam/.
In the navigation bar, choose Policies.
In the filter control, enter kinesis.
Choose the kinesis-analytics-service-MyApplication-us-west-2 policy.
Choose Policy Actions and then choose Delete.
In the navigation bar, choose Roles.
Choose the kinesis-analytics-MyApplication-us-west-2 role.
Choose Delete role and then confirm the deletion.
Open the CloudWatch console at https://console.aws.amazon.com/cloudwatch/.
In the navigation bar, choose Logs.
Choose the /aws/kinesis-analytics/MyApplication log group.
Choose Delete Log Group and then confirm the deletion.
Starting from version 1.15 Flink is Scala free. Applications can now use the Java API from any Scala version. Flink still uses Scala in a few key components internally but doesn't expose Scala into the user code classloader. Because of that, users need to add Scala dependencies into their jar-archives.
For more information about Scala changes in Flink 1.15, see Scala Free in One Fifteen.
In this exercise, you create a simple streaming application that uses Scala 3.2.0 and Flink's Java DataStream API. The application reads data from a Kinesis stream, aggregates it using sliding windows, and writes the results to an output Kinesis stream.
Download and examine the application codeThe application code for this example is available from GitHub. To download the application code, do the following:
Install the Git client if you haven't already. For more information, see Installing Git.
Clone the remote repository with the following command:
git clone https://github.com/aws-samples/amazon-kinesis-data-analytics-examples.git
Navigate to the amazon-kinesis-data-analytics-java-examples/scala/SlidingWindow
directory.
Note the following about the application code:
A build.sbt
file contains information about the application's configuration and dependencies, including the Managed Service for Apache Flink libraries.
The BasicStreamingJob.scala
file contains the main method that defines the application's functionality.
The application uses a Kinesis source to read from the source stream. The following snippet creates the Kinesis source:
private def createSource: FlinkKinesisConsumer[String] = {
val applicationProperties = KinesisAnalyticsRuntime.getApplicationProperties
val inputProperties = applicationProperties.get("ConsumerConfigProperties")
new FlinkKinesisConsumer[String](inputProperties.getProperty(streamNameKey, defaultInputStreamName),
new SimpleStringSchema, inputProperties)
}
The application also uses a Kinesis sink to write into the result stream. The following snippet creates the Kinesis sink:
private def createSink: KinesisStreamsSink[String] = {
val applicationProperties = KinesisAnalyticsRuntime.getApplicationProperties
val outputProperties = applicationProperties.get("ProducerConfigProperties")
KinesisStreamsSink.builder[String]
.setKinesisClientProperties(outputProperties)
.setSerializationSchema(new SimpleStringSchema)
.setStreamName(outputProperties.getProperty(streamNameKey, defaultOutputStreamName))
.setPartitionKeyGenerator((element: String) => String.valueOf(element.hashCode))
.build
}
The application uses the window operator to find the minimum price for each stock symbol over a 10-second window that slides by 5 seconds. The following code creates the operator and sends the aggregated data to a new Kinesis Data Streams sink:
environment.addSource(createSource)
.map { value =>
val jsonNode = jsonParser.readValue(value, classOf[JsonNode])
new Tuple2[String, Double](jsonNode.get("ticker").toString, jsonNode.get("price").asDouble)
}
.returns(Types.TUPLE(Types.STRING, Types.DOUBLE))
.keyBy(v => v.f0) // Logically partition the stream for each ticker
.window(SlidingProcessingTimeWindows.of(Time.seconds(10), Time.seconds(5)))
.min(1) // Calculate minimum price per ticker over the window
.map { value => value.f0 + String.format(",%.2f", value.f1) + "\n" }
.sinkTo(createSink)
The application creates source and sink connectors to access external resources using a StreamExecutionEnvironment object.
The application creates the source and sink connectors using dynamic application properties. The application's runtime properties are read to configure the connectors. For more information about runtime properties, see Runtime Properties.
In this section, you compile and upload your application code to an Amazon S3 bucket.
Compile the Application CodeUse the SBT build tool to build the Scala code for the application. To install SBT, see Install sbt with cs setup. You also need to install the Java Development Kit (JDK). See Prerequisites for Completing the Exercises.
To use your application code, you compile and package it into a JAR file. You can compile and package your code with SBT:
sbt assembly
If the application compiles successfully, the following file is created:
target/scala-3.2.0/sliding-window-scala-1.0.jar
In this section, you create an Amazon S3 bucket and upload your application code.
Open the Amazon S3 console at https://console.aws.amazon.com/s3/.
Choose Create bucket
Enter ka-app-code-<username>
in the Bucket name field. Add a suffix to the bucket name, such as your user name, to make it globally unique. Choose Next.
In Configure options, keep the settings as they are, and choose Next.
In Set permissions, keep the settings as they are, and choose Next.
Choose Create bucket.
Choose the ka-app-code-<username>
bucket, and then choose Upload.
In the Select files step, choose Add files. Navigate to the sliding-window-scala-1.0.jar
file that you created in the previous step.
You don't need to change any of the settings for the object, so choose Upload.
Your application code is now stored in an Amazon S3 bucket where your application can access it.
Create and run the application (console)Follow these steps to create, configure, update, and run the application using the console.
Create the applicationOpen the Managed Service for Apache Flink console at https://console.aws.amazon.com/flink
On the Managed Service for Apache Flink dashboard, choose Create analytics application.
On the Managed Service for Apache Flink - Create application page, provide the application details as follows:
For Application name, enter MyApplication
.
For Description, enter My Scala test app
.
For Runtime, choose Apache Flink.
Leave the version as Apache Flink version 1.15.2 (Recommended version).
For Access permissions, choose Create / update IAM role kinesis-analytics-MyApplication-us-west-2
.
Choose Create application.
When you create a Managed Service for Apache Flink application using the console, you have the option of having an IAM role and policy created for your application. Your application uses this role and policy to access its dependent resources. These IAM resources are named using your application name and Region as follows:
Policy: kinesis-analytics-service-
MyApplication
-us-west-2
Role: kinesisanalytics-
MyApplication
-us-west-2
Use the following procedure to configure the application.
To configure the applicationOn the MyApplication page, choose Configure.
On the Configure application page, provide the Code location:
For Amazon S3 bucket, enter ka-app-code-<username>.
For Path to Amazon S3 object, enter sliding-window-scala-1.0.jar.
Under Access to application resources, for Access permissions, choose Create / update IAM role kinesis-analytics-MyApplication-us-west-2
.
Under Properties, choose Add group.
Enter the following:
Group ID | Key | Value
ConsumerConfigProperties | input.stream.name | ExampleInputStream
ConsumerConfigProperties | aws.region | us-west-2
ConsumerConfigProperties | flink.stream.initpos | LATEST
Choose Save.
Under Properties, choose Add group again.
Enter the following:
Group ID                    Key                   Value
ProducerConfigProperties    output.stream.name    ExampleOutputStream
ProducerConfigProperties    aws.region            us-west-2
Under Monitoring, ensure that the Monitoring metrics level is set to Application.
For CloudWatch logging, choose the Enable check box.
Choose Update.
When you choose to enable Amazon CloudWatch logging, Managed Service for Apache Flink creates a log group and log stream for you. The names of these resources are as follows:
Log group: /aws/kinesis-analytics/MyApplication
Log stream: kinesis-analytics-log-stream
Edit the IAM policy to add permissions to access the Amazon S3 bucket.
To edit the IAM policy to add S3 bucket permissionsOpen the IAM console at https://console.aws.amazon.com/iam/.
Choose Policies. Choose the kinesis-analytics-service-MyApplication-us-west-2 policy that the console created for you in the previous section.
On the Summary page, choose Edit policy. Choose the JSON tab.
Add the highlighted section of the following policy example to the policy. Replace the sample account IDs (012345678901) with your account ID.
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ReadCode",
            "Effect": "Allow",
            "Action": [
                "s3:GetObject",
                "s3:GetObjectVersion"
            ],
            "Resource": [
                "arn:aws:s3:::ka-app-code-username/sliding-window-scala-1.0.jar"
            ]
        },
        {
            "Sid": "DescribeLogGroups",
            "Effect": "Allow",
            "Action": [
                "logs:DescribeLogGroups"
            ],
            "Resource": [
                "arn:aws:logs:us-west-2:012345678901:log-group:*"
            ]
        },
        {
            "Sid": "DescribeLogStreams",
            "Effect": "Allow",
            "Action": [
                "logs:DescribeLogStreams"
            ],
            "Resource": [
                "arn:aws:logs:us-west-2:012345678901:log-group:/aws/kinesis-analytics/MyApplication:log-stream:*"
            ]
        },
        {
            "Sid": "PutLogEvents",
            "Effect": "Allow",
            "Action": [
                "logs:PutLogEvents"
            ],
            "Resource": [
                "arn:aws:logs:us-west-2:012345678901:log-group:/aws/kinesis-analytics/MyApplication:log-stream:kinesis-analytics-log-stream"
            ]
        },
        {
            "Sid": "ReadInputStream",
            "Effect": "Allow",
            "Action": "kinesis:*",
            "Resource": "arn:aws:kinesis:us-west-2:012345678901:stream/ExampleInputStream"
        },
        {
            "Sid": "WriteOutputStream",
            "Effect": "Allow",
            "Action": "kinesis:*",
            "Resource": "arn:aws:kinesis:us-west-2:012345678901:stream/ExampleOutputStream"
        }
    ]
}
To view the Flink job graph, run the application, open the Apache Flink dashboard, and choose the desired Flink job.
Stop the applicationTo stop the application, on the MyApplication page, choose Stop. Confirm the action.
Create and run the application (CLI)In this section, you use the AWS Command Line Interface to create and run the Managed Service for Apache Flink application. Use the kinesisanalyticsv2 AWS CLI command to create and interact with Managed Service for Apache Flink applications.
Create a permissions policy NoteYou must create a permissions policy and role for your application. If you do not create these IAM resources, your application cannot access its data and log streams.
First, you create a permissions policy with two statements: one that grants permissions for the read action on the source stream, and another that grants permissions for write actions on the sink stream. You then attach the policy to an IAM role (which you create in the next section). Thus, when Managed Service for Apache Flink assumes the role, the service has the necessary permissions to read from the source stream and write to the sink stream.
Use the following code to create the AKReadSourceStreamWriteSinkStream permissions policy. Replace username with the user name that you used to create the Amazon S3 bucket to store the application code. Replace the account ID in the Amazon Resource Names (ARNs) (012345678901) with your account ID.
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ReadCode",
            "Effect": "Allow",
            "Action": [
                "s3:GetObject",
                "s3:GetObjectVersion"
            ],
            "Resource": [
                "arn:aws:s3:::ka-app-code-username/sliding-window-scala-1.0.jar"
            ]
        },
        {
            "Sid": "DescribeLogGroups",
            "Effect": "Allow",
            "Action": [
                "logs:DescribeLogGroups"
            ],
            "Resource": [
                "arn:aws:logs:us-west-2:012345678901:log-group:*"
            ]
        },
        {
            "Sid": "DescribeLogStreams",
            "Effect": "Allow",
            "Action": [
                "logs:DescribeLogStreams"
            ],
            "Resource": [
                "arn:aws:logs:us-west-2:012345678901:log-group:/aws/kinesis-analytics/MyApplication:log-stream:*"
            ]
        },
        {
            "Sid": "PutLogEvents",
            "Effect": "Allow",
            "Action": [
                "logs:PutLogEvents"
            ],
            "Resource": [
                "arn:aws:logs:us-west-2:012345678901:log-group:/aws/kinesis-analytics/MyApplication:log-stream:kinesis-analytics-log-stream"
            ]
        },
        {
            "Sid": "ReadInputStream",
            "Effect": "Allow",
            "Action": "kinesis:*",
            "Resource": "arn:aws:kinesis:us-west-2:012345678901:stream/ExampleInputStream"
        },
        {
            "Sid": "WriteOutputStream",
            "Effect": "Allow",
            "Action": "kinesis:*",
            "Resource": "arn:aws:kinesis:us-west-2:012345678901:stream/ExampleOutputStream"
        }
    ]
}
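If you prefer to create the policy from the command line instead of the IAM console, you can save the preceding document to a file and pass it to the AWS CLI. The file name AKReadSourceStreamWriteSinkStream.json below is a hypothetical choice for this example:
aws iam create-policy \
    --policy-name AKReadSourceStreamWriteSinkStream \
    --policy-document file://AKReadSourceStreamWriteSinkStream.json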
For step-by-step instructions to create a permissions policy, see Tutorial: Create and Attach Your First Customer Managed Policy in the IAM User Guide.
Create an IAM roleIn this section, you create an IAM role that the Managed Service for Apache Flink application can assume to read a source stream and write to the sink stream.
Managed Service for Apache Flink cannot access your stream without permissions. You grant these permissions via an IAM role. Each IAM role has two policies attached. The trust policy grants Managed Service for Apache Flink permission to assume the role, and the permissions policy determines what Managed Service for Apache Flink can do after assuming the role.
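The following is a minimal sketch of creating such a role from the AWS CLI. The inline document is the trust policy that allows the service principal kinesisanalytics.amazonaws.com to assume the role; the console steps that follow create an equivalent role for you:
aws iam create-role \
    --role-name MF-stream-rw-role \
    --assume-role-policy-document '{
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Principal": { "Service": "kinesisanalytics.amazonaws.com" },
                "Action": "sts:AssumeRole"
            }
        ]
    }'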
You attach the permissions policy that you created in the preceding section to this role.
To create an IAM roleOpen the IAM console at https://console.aws.amazon.com/iam/.
In the navigation pane, choose Roles and then Create Role.
Under Select type of trusted identity, choose AWS Service.
Under Choose the service that will use this role, choose Kinesis.
Under Select your use case, choose Managed Service for Apache Flink.
Choose Next: Permissions.
On the Attach permissions policies page, choose Next: Review. You attach permissions policies after you create the role.
On the Create role page, enter MF-stream-rw-role for the Role name. Choose Create role.
Now you have created a new IAM role called MF-stream-rw-role. Next, you update the trust and permissions policies for the role.
Attach the permissions policy to the role.
NoteFor this exercise, Managed Service for Apache Flink assumes this role for both reading data from a Kinesis data stream (source) and writing output to another Kinesis data stream. So you attach the policy that you created in the previous step, Create a Permissions Policy.
On the Summary page, choose the Permissions tab.
Choose Attach Policies.
In the search box, enter AKReadSourceStreamWriteSinkStream (the policy that you created in the previous section).
Choose the AKReadSourceStreamWriteSinkStream policy, and choose Attach policy.
You now have created the service execution role that your application uses to access resources. Make a note of the ARN of the new role.
For step-by-step instructions for creating a role, see Creating an IAM Role (Console) in the IAM User Guide.
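If you created the policy and role with the AWS CLI, you can attach the permissions policy to the role with a single command. The policy ARN below assumes the sample account ID used in this tutorial; replace it with your own:
aws iam attach-role-policy \
    --role-name MF-stream-rw-role \
    --policy-arn arn:aws:iam::012345678901:policy/AKReadSourceStreamWriteSinkStream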
Create the applicationSave the following JSON code to a file named create_request.json. Replace the sample role ARN with the ARN for the role that you created previously. Replace the bucket ARN suffix (username) with the suffix that you chose in the previous section. Replace the sample account ID (012345678901) in the service execution role with your account ID.
{
"ApplicationName": "sliding_window",
"ApplicationDescription": "Scala sliding_window application",
"RuntimeEnvironment": "FLINK-1_15",
"ServiceExecutionRole": "arn:aws:iam::012345678901:role/MF-stream-rw-role",
"ApplicationConfiguration": {
"ApplicationCodeConfiguration": {
"CodeContent": {
"S3ContentLocation": {
"BucketARN": "arn:aws:s3:::ka-app-code-username
",
"FileKey": "sliding-window-scala-1.0.jar"
}
},
"CodeContentType": "ZIPFILE"
},
"EnvironmentProperties": {
"PropertyGroups": [
{
"PropertyGroupId": "ConsumerConfigProperties",
"PropertyMap" : {
"aws.region" : "us-west-2",
"stream.name" : "ExampleInputStream",
"flink.stream.initpos" : "LATEST"
}
},
{
"PropertyGroupId": "ProducerConfigProperties",
"PropertyMap" : {
"aws.region" : "us-west-2",
"stream.name" : "ExampleOutputStream"
}
}
]
}
},
"CloudWatchLoggingOptions": [
{
"LogStreamARN": "arn:aws:logs:us-west-2:012345678901
:log-group:MyApplication:log-stream:kinesis-analytics-log-stream"
}
]
}
Execute the CreateApplication action with the following request to create the application:
aws kinesisanalyticsv2 create-application --cli-input-json file://create_request.json
The application is now created. You start the application in the next step.
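To confirm that the application was created, and to find values such as the current application version ID that later requests need, you can describe it. This is an optional check:
aws kinesisanalyticsv2 describe-application --application-name sliding_window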
Start the applicationIn this section, you use the StartApplication action to start the application.
To start the applicationSave the following JSON code to a file named start_request.json.
{
"ApplicationName": "sliding_window",
"RunConfiguration": {
"ApplicationRestoreConfiguration": {
"ApplicationRestoreType": "RESTORE_FROM_LATEST_SNAPSHOT"
}
}
}
Execute the StartApplication action with the preceding request to start the application:
aws kinesisanalyticsv2 start-application --cli-input-json file://start_request.json
The application is now running. You can check the Managed Service for Apache Flink metrics on the Amazon CloudWatch console to verify that the application is working.
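You can also tail the application's CloudWatch log stream from the command line. The tail subcommand requires AWS CLI version 2:
aws logs tail /aws/kinesis-analytics/MyApplication --follow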
Stop the applicationIn this section, you use the StopApplication action to stop the application.
To stop the applicationSave the following JSON code to a file named stop_request.json.
{
"ApplicationName": "sliding_window"
}
Execute the StopApplication action with the preceding request to stop the application:
aws kinesisanalyticsv2 stop-application --cli-input-json file://stop_request.json
The application is now stopped.
Add a CloudWatch logging optionYou can use the AWS CLI to add an Amazon CloudWatch log stream to your application. For information about using CloudWatch Logs with your application, see Setting Up Application Logging.
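For example, assuming the log group and log stream already exist and the application is at version 1, a request along the following lines adds the logging option. Treat the exact parameter shape as a sketch and check the command reference for your CLI version:
aws kinesisanalyticsv2 add-application-cloud-watch-logging-option \
    --application-name sliding_window \
    --current-application-version-id 1 \
    --cloud-watch-logging-option LogStreamARN=arn:aws:logs:us-west-2:012345678901:log-group:/aws/kinesis-analytics/MyApplication:log-stream:kinesis-analytics-log-stream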
Update environment propertiesIn this section, you use the UpdateApplication action to change the environment properties for the application without recompiling the application code. In this example, you change the Region of the source and destination streams.
To update environment properties for the applicationSave the following JSON code to a file named update_properties_request.json.
{
"ApplicationName": "sliding_window",
"CurrentApplicationVersionId": 1,
"ApplicationConfigurationUpdate": {
"EnvironmentPropertyUpdates": {
"PropertyGroups": [
{
"PropertyGroupId": "ConsumerConfigProperties",
"PropertyMap" : {
"aws.region" : "us-west-2",
"stream.name" : "ExampleInputStream",
"flink.stream.initpos" : "LATEST"
}
},
{
"PropertyGroupId": "ProducerConfigProperties",
"PropertyMap" : {
"aws.region" : "us-west-2",
"stream.name" : "ExampleOutputStream"
}
}
]
}
}
}
Execute the UpdateApplication action with the preceding request to update environment properties:
aws kinesisanalyticsv2 update-application --cli-input-json file://update_properties_request.json
When you need to update your application code with a new version of your code package, you use the UpdateApplication CLI action.
NoteTo load a new version of the application code with the same file name, you must specify the new object version. For more information about using Amazon S3 object versions, see Enabling or Disabling Versioning.
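If versioning is not already enabled on the bucket, you can turn it on with the AWS CLI; Amazon S3 only generates object version IDs for versioned buckets:
aws s3api put-bucket-versioning \
    --bucket ka-app-code-<username> \
    --versioning-configuration Status=Enabled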
To use the AWS CLI, delete your previous code package from your Amazon S3 bucket, upload the new version, and call UpdateApplication, specifying the same Amazon S3 bucket and object name, and the new object version. The application will restart with the new code package.
The following sample request for the UpdateApplication action reloads the application code and restarts the application. Update the CurrentApplicationVersionId to the current application version. You can check the current application version using the ListApplications or DescribeApplication actions. Update the bucket name suffix (<username>) with the suffix that you chose in the Create dependent resources section.
{
"ApplicationName": "sliding_window",
"CurrentApplicationVersionId": 1,
"ApplicationConfigurationUpdate": {
"ApplicationCodeConfigurationUpdate": {
"CodeContentUpdate": {
"S3ContentLocationUpdate": {
"BucketARNUpdate": "arn:aws:s3:::ka-app-code-username
",
"FileKeyUpdate": "-1.0.jar",
"ObjectVersionUpdate": "SAMPLEUehYngP87ex1nzYIGYgfhypvDU"
}
}
}
}
}
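Assuming you save the preceding request to a file named update_request.json (a hypothetical file name for this example), execute the UpdateApplication action with:
aws kinesisanalyticsv2 update-application --cli-input-json file://update_request.json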
Clean up AWS resources
This section includes procedures for cleaning up AWS resources created in the sliding window tutorial.
Delete your Managed Service for Apache Flink applicationOpen the Managed Service for Apache Flink console at https://console.aws.amazon.com/flink
In the Managed Service for Apache Flink panel, choose MyApplication.
On the application page, choose Delete and then confirm the deletion.
Open the Kinesis console at https://console.aws.amazon.com/kinesis.
In the Kinesis Data Streams panel, choose ExampleInputStream.
In the ExampleInputStream page, choose Delete Kinesis Stream and then confirm the deletion.
In the Kinesis streams page, choose the ExampleOutputStream, choose Actions, choose Delete, and then confirm the deletion.
Open the Amazon S3 console at https://console.aws.amazon.com/s3/.
Choose the ka-app-code-<username> bucket.
Choose Delete and then enter the bucket name to confirm deletion.
Open the IAM console at https://console.aws.amazon.com/iam/.
In the navigation bar, choose Policies.
In the filter control, enter kinesis.
Choose the kinesis-analytics-service-MyApplication-us-west-2 policy.
Choose Policy Actions and then choose Delete.
In the navigation bar, choose Roles.
Choose the kinesis-analytics-MyApplication-us-west-2 role.
Choose Delete role and then confirm the deletion.
Open the CloudWatch console at https://console.aws.amazon.com/cloudwatch/.
In the navigation bar, choose Logs.
Choose the /aws/kinesis-analytics/MyApplication log group.
Choose Delete Log Group and then confirm the deletion.
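If you prefer, you can remove most of these resources from the command line as well. This is a sketch under the assumption that the resources use the names from this tutorial; the delete-application call needs the application's creation timestamp, which you can read from the describe-application output:
aws kinesisanalyticsv2 delete-application --application-name sliding_window --create-timestamp <timestamp-from-describe-application>
aws kinesis delete-stream --stream-name ExampleInputStream
aws kinesis delete-stream --stream-name ExampleOutputStream
aws s3 rb s3://ka-app-code-<username> --force
aws logs delete-log-group --log-group-name /aws/kinesis-analytics/MyApplication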
Starting from version 1.15, Flink is Scala free. Applications can now use the Java API from any Scala version. Flink still uses Scala in a few key components internally, but doesn't expose Scala into the user code classloader. Because of that, users need to add Scala dependencies into their JAR archives.
For more information about Scala changes in Flink 1.15, see Scala Free in One Fifteen.
In this exercise, you create a simple streaming application that uses Scala 3.2.0 and Flink's Java DataStream API. The application reads data from a Kinesis stream, aggregates it using sliding windows, and writes the results to Amazon S3.
NoteTo set up required prerequisites for this exercise, first complete the Getting Started (Scala) exercise. You only need to create an additional folder data/ in the Amazon S3 bucket ka-app-code-<username>.
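If you want to create the data/ folder from the command line rather than the console, you can add a zero-byte folder marker object with the AWS CLI:
aws s3api put-object --bucket ka-app-code-<username> --key data/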
The Scala application code for this example is available from GitHub. To download the application code, do the following:
Install the Git client if you haven't already. For more information, see Installing Git.
Clone the remote repository with the following command:
git clone https://github.com/aws-samples/amazon-kinesis-data-analytics-examples.git
Navigate to the amazon-kinesis-data-analytics-java-examples/scala/S3Sink directory.
Note the following about the application code:
A build.sbt file contains information about the application's configuration and dependencies, including the Managed Service for Apache Flink libraries.
The BasicStreamingJob.scala file contains the main method that defines the application's functionality.
The application uses a Kinesis source to read from the source stream. The following snippet creates the Kinesis source:
private def createSource: FlinkKinesisConsumer[String] = {
  // Read the connector configuration from the application's runtime properties
  val applicationProperties = KinesisAnalyticsRuntime.getApplicationProperties
  val inputProperties = applicationProperties.get("ConsumerConfigProperties")

  // Consume records from the configured Kinesis stream as plain strings
  new FlinkKinesisConsumer[String](inputProperties.getProperty(streamNameKey, defaultInputStreamName),
    new SimpleStringSchema, inputProperties)
}
The application also uses a StreamingFileSink to write to an Amazon S3 bucket:
def createSink: StreamingFileSink[String] = {
  // Read the S3 output path from the application's runtime properties
  val applicationProperties = KinesisAnalyticsRuntime.getApplicationProperties
  val s3SinkPath = applicationProperties.get("ProducerConfigProperties").getProperty("s3.sink.path")

  // Write each record as a UTF-8 encoded row under the configured S3 path
  StreamingFileSink
    .forRowFormat(new Path(s3SinkPath), new SimpleStringEncoder[String]("UTF-8"))
    .build()
}
The application creates source and sink connectors to access external resources using a StreamExecutionEnvironment object.
The application creates source and sink connectors using dynamic application properties. The application's runtime properties are read to configure the connectors. For more information about runtime properties, see Runtime Properties.
In this section, you compile and upload your application code to an Amazon S3 bucket.
Compile the Application CodeUse the SBT build tool to build the Scala code for the application. To install SBT, see Install sbt with cs setup. You also need to install the Java Development Kit (JDK). See Prerequisites for Completing the Exercises.
To use your application code, you compile and package it into a JAR file. You can compile and package your code with SBT:
sbt assembly
If the application compiles successfully, the following file is created:
target/scala-3.2.0/s3-sink-scala-1.0.jar
In this section, you create an Amazon S3 bucket and upload your application code.
Open the Amazon S3 console at https://console.aws.amazon.com/s3/.
Choose Create bucket.
Enter ka-app-code-<username> in the Bucket name field. Add a suffix to the bucket name, such as your user name, to make it globally unique. Choose Next.
In Configure options, keep the settings as they are, and choose Next.
In Set permissions, keep the settings as they are, and choose Next.
Choose Create bucket.
Choose the ka-app-code-<username> bucket, and then choose Upload.
In the Select files step, choose Add files. Navigate to the s3-sink-scala-1.0.jar file that you created in the previous step.
You don't need to change any of the settings for the object, so choose Upload.
Your application code is now stored in an Amazon S3 bucket where your application can access it.
Create and run the application (console)Follow these steps to create, configure, update, and run the application using the console.
Create the applicationOpen the Managed Service for Apache Flink console at https://console.aws.amazon.com/flink
On the Managed Service for Apache Flink dashboard, choose Create analytics application.
On the Managed Service for Apache Flink - Create application page, provide the application details as follows:
For Application name, enter MyApplication.
For Description, enter My java test app.
For Runtime, choose Apache Flink.
Leave the version as Apache Flink version 1.15.2 (Recommended version).
For Access permissions, choose Create / update IAM role kinesis-analytics-MyApplication-us-west-2.
Choose Create application.
When you create a Managed Service for Apache Flink application using the console, you have the option of having an IAM role and policy created for your application. Your application uses this role and policy to access its dependent resources. These IAM resources are named using your application name and Region as follows:
Policy: kinesis-analytics-service-MyApplication-us-west-2
Role: kinesis-analytics-MyApplication-us-west-2
Use the following procedure to configure the application.
To configure the applicationOn the MyApplication page, choose Configure.
On the Configure application page, provide the Code location:
For Amazon S3 bucket, enter ka-app-code-<username>.
For Path to Amazon S3 object, enter s3-sink-scala-1.0.jar.
Under Access to application resources, for Access permissions, choose Create / update IAM role kinesis-analytics-MyApplication-us-west-2.
Under Properties, choose Add group.
Enter the following:
Group ID                    Key                     Value
ConsumerConfigProperties    input.stream.name       ExampleInputStream
ConsumerConfigProperties    aws.region              us-west-2
ConsumerConfigProperties    flink.stream.initpos    LATEST
Choose Save.
Under Properties, choose Add group.
Enter the following:
Group ID                    Key             Value
ProducerConfigProperties    s3.sink.path    s3a://ka-app-code-<user-name>/data
Under Monitoring, ensure that the Monitoring metrics level is set to Application.
For CloudWatch logging, choose the Enable check box.
Choose Update.
When you choose to enable Amazon CloudWatch logging, Managed Service for Apache Flink creates a log group and log stream for you. The names of these resources are as follows:
Log group: /aws/kinesis-analytics/MyApplication
Log stream: kinesis-analytics-log-stream
Edit the IAM policy to add permissions to access the Amazon S3 bucket.
To edit the IAM policy to add S3 bucket permissionsOpen the IAM console at https://console.aws.amazon.com/iam/.
Choose Policies. Choose the kinesis-analytics-service-MyApplication-us-west-2 policy that the console created for you in the previous section.
On the Summary page, choose Edit policy. Choose the JSON tab.
Add the highlighted section of the following policy example to the policy. Replace the sample account IDs (012345678901) with your account ID.
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ReadCode",
            "Effect": "Allow",
            "Action": [
                "s3:Abort*",
                "s3:DeleteObject*",
                "s3:GetObject*",
                "s3:GetBucket*",
                "s3:List*",
                "s3:ListBucket",
                "s3:PutObject"
            ],
            "Resource": [
                "arn:aws:s3:::ka-app-code-<username>",
                "arn:aws:s3:::ka-app-code-<username>/*"
            ]
        },
        {
            "Sid": "DescribeLogGroups",
            "Effect": "Allow",
            "Action": [
                "logs:DescribeLogGroups"
            ],
            "Resource": [
                "arn:aws:logs:us-west-2:012345678901:log-group:*"
            ]
        },
        {
            "Sid": "DescribeLogStreams",
            "Effect": "Allow",
            "Action": [
                "logs:DescribeLogStreams"
            ],
            "Resource": [
                "arn:aws:logs:us-west-2:012345678901:log-group:/aws/kinesis-analytics/MyApplication:log-stream:*"
            ]
        },
        {
            "Sid": "PutLogEvents",
            "Effect": "Allow",
            "Action": [
                "logs:PutLogEvents"
            ],
            "Resource": [
                "arn:aws:logs:us-west-2:012345678901:log-group:/aws/kinesis-analytics/MyApplication:log-stream:kinesis-analytics-log-stream"
            ]
        },
        {
            "Sid": "ReadInputStream",
            "Effect": "Allow",
            "Action": "kinesis:*",
            "Resource": "arn:aws:kinesis:us-west-2:012345678901:stream/ExampleInputStream"
        }
    ]
}
To view the Flink job graph, run the application, open the Apache Flink dashboard, and choose the desired Flink job.
Stop the applicationTo stop the application, on the MyApplication page, choose Stop. Confirm the action.
Create and run the application (CLI)In this section, you use the AWS Command Line Interface to create and run the Managed Service for Apache Flink application. Use the kinesisanalyticsv2 AWS CLI command to create and interact with Managed Service for Apache Flink applications.
Create a permissions policy NoteYou must create a permissions policy and role for your application. If you do not create these IAM resources, your application cannot access its data and log streams.
First, you create a permissions policy with two statements: one that grants permissions for the read action on the source stream, and another that grants permissions for write actions on the sink stream. You then attach the policy to an IAM role (which you create in the next section). Thus, when Managed Service for Apache Flink assumes the role, the service has the necessary permissions to read from the source stream and write to the sink stream.
Use the following code to create the AKReadSourceStreamWriteSinkStream permissions policy. Replace username with the user name that you used to create the Amazon S3 bucket to store the application code. Replace the account ID in the Amazon Resource Names (ARNs) (012345678901) with your account ID.
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ReadCode",
            "Effect": "Allow",
            "Action": [
                "s3:GetObject",
                "s3:GetObjectVersion"
            ],
            "Resource": [
                "arn:aws:s3:::ka-app-code-username/s3-sink-scala-1.0.jar"
            ]
        },
        {
            "Sid": "DescribeLogGroups",
            "Effect": "Allow",
            "Action": [
                "logs:DescribeLogGroups"
            ],
            "Resource": [
                "arn:aws:logs:us-west-2:012345678901:*"
            ]
        },
        {
            "Sid": "DescribeLogStreams",
            "Effect": "Allow",
            "Action": [
                "logs:DescribeLogStreams"
            ],
            "Resource": [
                "arn:aws:logs:us-west-2:012345678901:log-group:/aws/kinesis-analytics/MyApplication:log-stream:*"
            ]
        },
        {
            "Sid": "PutLogEvents",
            "Effect": "Allow",
            "Action": [
                "logs:PutLogEvents"
            ],
            "Resource": [
                "arn:aws:logs:us-west-2:012345678901:log-group:/aws/kinesis-analytics/MyApplication:log-stream:kinesis-analytics-log-stream"
            ]
        },
        {
            "Sid": "ReadInputStream",
            "Effect": "Allow",
            "Action": "kinesis:*",
            "Resource": "arn:aws:kinesis:us-west-2:012345678901:stream/ExampleInputStream"
        },
        {
            "Sid": "WriteOutputStream",
            "Effect": "Allow",
            "Action": "kinesis:*",
            "Resource": "arn:aws:kinesis:us-west-2:012345678901:stream/ExampleOutputStream"
        }
    ]
}
For step-by-step instructions to create a permissions policy, see Tutorial: Create and Attach Your First Customer Managed Policy in the IAM User Guide.
Create an IAM roleIn this section, you create an IAM role that the Managed Service for Apache Flink application can assume to read a source stream and write to the sink stream.
Managed Service for Apache Flink cannot access your stream without permissions. You grant these permissions via an IAM role. Each IAM role has two policies attached. The trust policy grants Managed Service for Apache Flink permission to assume the role, and the permissions policy determines what Managed Service for Apache Flink can do after assuming the role.
You attach the permissions policy that you created in the preceding section to this role.
To create an IAM roleOpen the IAM console at https://console.aws.amazon.com/iam/.
In the navigation pane, choose Roles and then Create Role.
Under Select type of trusted identity, choose AWS Service.
Under Choose the service that will use this role, choose Kinesis.
Under Select your use case, choose Managed Service for Apache Flink.
Choose Next: Permissions.
On the Attach permissions policies page, choose Next: Review. You attach permissions policies after you create the role.
On the Create role page, enter MF-stream-rw-role for the Role name. Choose Create role.
Now you have created a new IAM role called MF-stream-rw-role. Next, you update the trust and permissions policies for the role.
Attach the permissions policy to the role.
NoteFor this exercise, Managed Service for Apache Flink assumes this role for both reading data from a Kinesis data stream (source) and writing output to another Kinesis data stream. So you attach the policy that you created in the previous step, Create a Permissions Policy.
On the Summary page, choose the Permissions tab.
Choose Attach Policies.
In the search box, enter AKReadSourceStreamWriteSinkStream (the policy that you created in the previous section).
Choose the AKReadSourceStreamWriteSinkStream policy, and choose Attach policy.
You now have created the service execution role that your application uses to access resources. Make a note of the ARN of the new role.
For step-by-step instructions for creating a role, see Creating an IAM Role (Console) in the IAM User Guide.
Create the applicationSave the following JSON code to a file named create_request.json. Replace the sample role ARN with the ARN for the role that you created previously. Replace the bucket ARN suffix (username) with the suffix that you chose in the previous section. Replace the sample account ID (012345678901) in the service execution role with your account ID.
{
"ApplicationName": "s3_sink",
"ApplicationDescription": "Scala tumbling window application",
"RuntimeEnvironment": "FLINK-1_15",
"ServiceExecutionRole": "arn:aws:iam::012345678901:role/MF-stream-rw-role",
"ApplicationConfiguration": {
"ApplicationCodeConfiguration": {
"CodeContent": {
"S3ContentLocation": {
"BucketARN": "arn:aws:s3:::ka-app-code-username
",
"FileKey": "s3-sink-scala-1.0.jar"
}
},
"CodeContentType": "ZIPFILE"
},
"EnvironmentProperties": {
"PropertyGroups": [
{
"PropertyGroupId": "ConsumerConfigProperties",
"PropertyMap" : {
"aws.region" : "us-west-2",
"stream.name" : "ExampleInputStream",
"flink.stream.initpos" : "LATEST"
}
},
{
"PropertyGroupId": "ProducerConfigProperties",
"PropertyMap" : {
"s3.sink.path" : "s3a://ka-app-code-<username>/data"
}
}
]
}
},
"CloudWatchLoggingOptions": [
{
"LogStreamARN": "arn:aws:logs:us-west-2:012345678901
:log-group:MyApplication:log-stream:kinesis-analytics-log-stream"
}
]
}
Execute the CreateApplication action with the following request to create the application:
aws kinesisanalyticsv2 create-application --cli-input-json file://create_request.json
The application is now created. You start the application in the next step.
Start the applicationIn this section, you use the StartApplication action to start the application.
To start the applicationSave the following JSON code to a file named start_request.json.
{
"ApplicationName": "s3_sink",
"RunConfiguration": {
"ApplicationRestoreConfiguration": {
"ApplicationRestoreType": "RESTORE_FROM_LATEST_SNAPSHOT"
}
}
}
Execute the StartApplication action with the preceding request to start the application:
aws kinesisanalyticsv2 start-application --cli-input-json file://start_request.json
The application is now running. You can check the Managed Service for Apache Flink metrics on the Amazon CloudWatch console to verify that the application is working.
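Because this application writes its results to Amazon S3 rather than to an output stream, you can also verify that it is working by listing the objects the sink produces under the data/ prefix:
aws s3 ls s3://ka-app-code-<username>/data/ --recursive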
Stop the applicationIn this section, you use the StopApplication action to stop the application.
To stop the applicationSave the following JSON code to a file named stop_request.json.
{
"ApplicationName": "s3_sink"
}
Execute the StopApplication action with the preceding request to stop the application:
aws kinesisanalyticsv2 stop-application --cli-input-json file://stop_request.json
The application is now stopped.
Add a CloudWatch logging optionYou can use the AWS CLI to add an Amazon CloudWatch log stream to your application. For information about using CloudWatch Logs with your application, see Setting Up Application Logging.
Update environment propertiesIn this section, you use the UpdateApplication action to change the environment properties for the application without recompiling the application code. In this example, you change the Region of the source and destination streams.
To update environment properties for the applicationSave the following JSON code to a file named update_properties_request.json.
{
"ApplicationName": "s3_sink",
"CurrentApplicationVersionId": 1,
"ApplicationConfigurationUpdate": {
"EnvironmentPropertyUpdates": {
"PropertyGroups": [
{
"PropertyGroupId": "ConsumerConfigProperties",
"PropertyMap" : {
"aws.region" : "us-west-2",
"stream.name" : "ExampleInputStream",
"flink.stream.initpos" : "LATEST"
}
},
{
"PropertyGroupId": "ProducerConfigProperties",
"PropertyMap" : {
"s3.sink.path" : "s3a://ka-app-code-<username>/data"
}
}
]
}
}
}
Execute the UpdateApplication action with the preceding request to update environment properties:
aws kinesisanalyticsv2 update-application --cli-input-json file://update_properties_request.json
When you need to update your application code with a new version of your code package, you use the UpdateApplication CLI action.
NoteTo load a new version of the application code with the same file name, you must specify the new object version. For more information about using Amazon S3 object versions, see Enabling or Disabling Versioning.
To use the AWS CLI, delete your previous code package from your Amazon S3 bucket, upload the new version, and call UpdateApplication, specifying the same Amazon S3 bucket and object name, and the new object version. The application will restart with the new code package.
The following sample request for the UpdateApplication action reloads the application code and restarts the application. Update the CurrentApplicationVersionId to the current application version. You can check the current application version using the ListApplications or DescribeApplication actions. Update the bucket name suffix (<username>) with the suffix that you chose in the Create dependent resources section.
{
"ApplicationName": "s3_sink",
"CurrentApplicationVersionId": 1,
"ApplicationConfigurationUpdate": {
"ApplicationCodeConfigurationUpdate": {
"CodeContentUpdate": {
"S3ContentLocationUpdate": {
"BucketARNUpdate": "arn:aws:s3:::ka-app-code-username
",
"FileKeyUpdate": "s3-sink-scala-1.0.jar",
"ObjectVersionUpdate": "SAMPLEUehYngP87ex1nzYIGYgfhypvDU"
}
}
}
}
}
Clean up AWS resources
This section includes procedures for cleaning up AWS resources created in this tutorial.
Delete your Managed Service for Apache Flink applicationOpen the Managed Service for Apache Flink console at https://console.aws.amazon.com/flink
In the Managed Service for Apache Flink panel, choose MyApplication.
On the application page, choose Delete and then confirm the deletion.
Open the Kinesis console at https://console.aws.amazon.com/kinesis.
In the Kinesis Data Streams panel, choose ExampleInputStream.
In the ExampleInputStream page, choose Delete Kinesis Stream and then confirm the deletion.
In the Kinesis streams page, choose the ExampleOutputStream, choose Actions, choose Delete, and then confirm the deletion.
Open the Amazon S3 console at https://console.aws.amazon.com/s3/.
Choose the ka-app-code-<username> bucket.
Choose Delete and then enter the bucket name to confirm deletion.
Open the IAM console at https://console.aws.amazon.com/iam/.
In the navigation bar, choose Policies.
In the filter control, enter kinesis.
Choose the kinesis-analytics-service-MyApplication-us-west-2 policy.
Choose Policy Actions and then choose Delete.
In the navigation bar, choose Roles.
Choose the kinesis-analytics-MyApplication-us-west-2 role.
Choose Delete role and then confirm the deletion.
Open the CloudWatch console at https://console.aws.amazon.com/cloudwatch/.
In the navigation bar, choose Logs.
Choose the /aws/kinesis-analytics/MyApplication log group.
Choose Delete Log Group and then confirm the deletion.